From nyarly@REDACTED Tue Dec 1 02:06:50 2015
From: nyarly@REDACTED (Judson Lester)
Date: Tue, 01 Dec 2015 01:06:50 +0000
Subject: [erlang-questions] Trimming log output
Message-ID: 

I'm using the dbg module to work through some knotty issues in integration tests (as I feel out how to reduce said issues...) and in the meantime I wonder if there's a way to explain to dbg or ~p that I don't need all the details of a re_pattern in my output. Is there something like marking a type as opaque, but for log output? Am I better off trying to do some kind of post-processing?

Judson
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From khitai.pang@REDACTED Tue Dec 1 11:33:02 2015
From: khitai.pang@REDACTED (Khitai Pang)
Date: Tue, 1 Dec 2015 18:33:02 +0800
Subject: [erlang-questions] Implementing an erlang server to host organizational data
Message-ID: 

Hi,

I am writing an Erlang server to host organizational data, e.g. the hierarchy of a company. Every organization has a root node (the top branch), a branch node can have zero or multiple child nodes, a child node can be an employee node or a branch node, and an employee node has no children... Basically, it's tree-like data. All organization data is hosted on the Erlang server, and every client has its own local copy of the data. A client (mobile app) connects to the Erlang server to operate on the data (create an organization, add a branch, add an employee, rename a branch, add a subbranch to a branch, etc.), and any change to the tree should be instantly synced to all clients.

Out of pure passion, I decided to do this with Erlang. I am new to Erlang, new as in a couple of weeks.

How do I store the data on disk? Mnesia? How do I organize the data? Does Erlang have some OTP-fu to handle tree data? Does Erlang have a way to do a mutex? How do the Erlang server and clients talk to do data syncing? Can the syncing be done by communicating with raw Erlang terms?
Sorry to ask so many questions; some may be off-topic on a mailing list for Erlang the programming language. I really need some guidance before starting work on it. And I want to know how Erlangers would tend to implement this.

Thanks
Khitai

From montuori@REDACTED Tue Dec 1 15:50:45 2015
From: montuori@REDACTED (Kevin Montuori)
Date: Tue, 01 Dec 2015 08:50:45 -0600
Subject: [erlang-questions] Implementing an erlang server to host organizational data
In-Reply-To: (Khitai Pang's message of "Tue, 1 Dec 2015 18:33:02 +0800")
References: 
Message-ID: 

>>>>> "kp" == Khitai Pang writes:

kp> I am writing an erlang server to host organizational data, e.g. the
kp> hierarchy of a company.

kp> How to store the data on disk? Mnesia? How to organize the
kp> data?

When faced with these requirements, I'd use an LDAP directory for the on-disk storage. There are any number of roll-your-own solutions, but for a commercial (read "real world") application it's hard to beat LDAP.

The eldap application will get you going. Should you require a polyglot solution, you'll find there are few languages that can't handle LDAP.

k.

-- 
Kevin Montuori
montuori@REDACTED

From seancribbs@REDACTED Tue Dec 1 16:31:22 2015
From: seancribbs@REDACTED (Sean Cribbs)
Date: Tue, 1 Dec 2015 09:31:22 -0600
Subject: [erlang-questions] Implementing an erlang server to host organizational data
In-Reply-To: 
References: 
Message-ID: 

On Tue, Dec 1, 2015 at 4:33 AM, Khitai Pang wrote:
>
> How to store the data on disk? Mnesia? How to organize the data? Does
> erlang has some OTP-fu to handle tree data? Does erlang has a way to do
> mutex? How do erlang server and clients talk to do data syncing? Can the
> syncing be done by communicating with raw Erlang terms?
>

There are many ways to store on-disk (although I like Kevin's LDAP idea best). Mnesia is neat but might not be what you want if your goal is to sync with clients.
There are a number of key-value storage engines that could work, or you could just write the entire data structure to a file occasionally.

For the data you are modeling, records or maps seem appropriate (although I'm wondering how a branch of the org tree doesn't have a manager where the branch's children are her direct reports). If it were records, I might do it like so:

    -record(employee, {name :: string(), title :: string()}).
    -record(branch, {children = [] :: [#employee{} | #branch{}]}).

From that it's straightforward to do a depth-first traversal of the tree:

    traverse(#branch{children=C}, Fun) ->
        lists:foreach(fun(R) -> traverse(R, Fun) end, C);
    traverse(#employee{}=E, Fun) ->
        Fun(E).

"Mutex" is something that doesn't really exist in Erlang, because there's no direct concept of shared memory. Instead, I'd send all mutations (and possibly reads) of a shared structure through a single process whose only purpose is to serialize access to the state of the structure. This is a very common pattern. If you can tolerate reads being unserialized ("dirty" or "stale"), put the state in an ets table that has public reads. This works well in high-read, low-update scenarios.

Serialized Erlang terms are great, but not incredibly interoperable. I'd go with a more commonly used format: for text, XML or JSON; for binary or more compact representations, I'd use msgpack, Protocol Buffers, or ASN.1.

"Syncing" is a whole other topic that requires a primer on distributed systems that can't fit in this email. I suggest reading http://book.mixu.net/distsys/ when you have the time. Even implementing a simple best-effort delivery is full of deep potholes, not to mention handling concurrent updates.

> Sorry to ask so many questions, some may be off-topic in an mailing list
> for Erlang the programming language. I really need some guidance before
> start working on it. And I want to know in what way erlangers' would tend
> to implement this.
>
>
> Thanks
> Khitai
> _______________________________________________
> erlang-questions mailing list
> erlang-questions@REDACTED
> http://erlang.org/mailman/listinfo/erlang-questions
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From garry@REDACTED Tue Dec 1 19:38:46 2015
From: garry@REDACTED (Garry Hodgson)
Date: Tue, 01 Dec 2015 13:38:46 -0500
Subject: [erlang-questions] avoiding duplicate headers in cowboy
Message-ID: <565DE936.9070400@research.att.com>

I've got an application where I'm using cowboy to build what is, in effect, a web proxy. It accepts http/s requests, makes some decisions about whether and where to forward, then makes its own request of the final endpoint and returns the results. It was originally written in webmachine and then ported to cowboy.

Everything works fine, but for one small nit: it appears that cowboy automagically adds its own headers to the returned results, which causes duplicate headers for server, content-type, and content-length, as seen below.

Is there some way to avoid this?

$ curl -X GET http://localhost:8080/v0/log/logs?maxrecords=1 -k -H "$Creds" -v
* About to connect() to localhost port 8080 (#0)
*   Trying 127.0.0.1... connected
* Connected to localhost (127.0.0.1) port 8080 (#0)
> GET /v0/log/logs?maxrecords=1 HTTP/1.1
> User-Agent: curl/7.19.7 (x86_64-redhat-linux-gnu) libcurl/7.19.7 NSS/3.19.1 Basic ECC zlib/1.2.3 libidn/1.18 libssh2/1.4.2
> Host: localhost:8080
> Accept: */*
> Authorization: Bearer bc18fb0f-4b1f-493e-823c-fbefa0383e91
>
< HTTP/1.1 200 OK
< connection: keep-alive
< server: Cowboy
< date: Tue, 01 Dec 2015 17:34:29 GMT
< content-length: 71
< content-type: application/json
< connection: Keep-Alive
< date: Tue, 01 Dec 2015 17:34:30 GMT
< server: AT&T SECWEB 2.1.0
< content-length: 71
< content-type: application/json
< x-powered-by: PHP/5.4.28
< keep-alive: timeout=5, max=100
<
* Connection #0 to host localhost left intact
* Closing connection #0
{"error":"Not found","error_description":"Could not read token in CTS"}

-- 
Garry Hodgson
Lead Member of Technical Staff
AT&T Chief Security Office (CSO)

"This e-mail and any files transmitted with it are AT&T property, are confidential, and are intended solely for the use of the individual or entity to whom this e-mail is addressed. If you are not one of the named recipient(s) or otherwise have reason to believe that you have received this message in error, please notify the sender and delete this message immediately from your computer. Any other use, retention, dissemination, forwarding, printing, or copying of this e-mail is strictly prohibited."

From essen@REDACTED Tue Dec 1 19:43:35 2015
From: essen@REDACTED (=?UTF-8?Q?Lo=c3=afc_Hoguin?=)
Date: Tue, 1 Dec 2015 19:43:35 +0100
Subject: [erlang-questions] avoiding duplicate headers in cowboy
In-Reply-To: <565DE936.9070400@research.att.com>
References: <565DE936.9070400@research.att.com>
Message-ID: <565DEA57.9030601@ninenines.eu>

I'll make a guess here: you are passing header names as lists to Cowboy? The correct type is binary. Dialyzer would probably tell you that.
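In other words (a hedged sketch against the Cowboy 1.x reply API; the call shape and the header shown are illustrative, not Garry's actual code), header names and values should be binaries so Cowboy can recognize them as the headers it manages:

    %% Binary header names/values -- what Cowboy expects:
    {ok, Req2} = cowboy_req:reply(200,
        [{<<"content-type">>, <<"application/json">>}],
        Body, Req).
    %% String (list) names such as {"content-type", "application/json"} are
    %% not matched against the headers Cowboy sets itself, which is one way
    %% to end up with duplicates in the response.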
On 12/01/2015 07:38 PM, Garry Hodgson wrote:
> I've got an application where I'm using cowboy to build what is,
> in effect, a web proxy. It accepts http/s requests, makes some
> decisions about whether and where to forward, then makes its
> own request of the final endpoint and returns results. it was
> originally written in webmachine and then ported to cowboy.
>
> everything works fine, but for one small nit. it appears that
> cowboy automagically adds its own headers to the returned
> results, which causes duplicate headers for server, content-type,
> and content-length, as seen below.
>
> is there some way to avoid this?
>
> $ curl -X GET http://localhost:8080/v0/log/logs?maxrecords=1 -k -H
> "$Creds" -v
> * About to connect() to localhost port 8080 (#0)
> *   Trying 127.0.0.1... connected
> * Connected to localhost (127.0.0.1) port 8080 (#0)
> > GET /v0/log/logs?maxrecords=1 HTTP/1.1
> > User-Agent: curl/7.19.7 (x86_64-redhat-linux-gnu) libcurl/7.19.7
> NSS/3.19.1 Basic ECC zlib/1.2.3 libidn/1.18 libssh2/1.4.2
> > Host: localhost:8080
> > Accept: */*
> > Authorization: Bearer bc18fb0f-4b1f-493e-823c-fbefa0383e91
> >
> < HTTP/1.1 200 OK
> < connection: keep-alive
> < server: Cowboy
> < date: Tue, 01 Dec 2015 17:34:29 GMT
> < content-length: 71
> < content-type: application/json
> < connection: Keep-Alive
> < date: Tue, 01 Dec 2015 17:34:30 GMT
> < server: AT&T SECWEB 2.1.0
> < content-length: 71
> < content-type: application/json
> < x-powered-by: PHP/5.4.28
> < keep-alive: timeout=5, max=100
> <
> * Connection #0 to host localhost left intact
> * Closing connection #0
> {"error":"Not found","error_description":"Could not read token in CTS"}
>

-- 
Loïc Hoguin
http://ninenines.eu
Author of The Erlanger Playbook,
A book about software development using Erlang

From garry@REDACTED Tue Dec 1 19:46:24 2015
From: garry@REDACTED (Garry Hodgson)
Date: Tue, 01 Dec 2015 13:46:24 -0500
Subject: [erlang-questions] single packet authorization in erlang?
Message-ID: <565DEB00.7030307@research.att.com>

I'm working on an application that uses Single Packet Authorization (SPA) to protect access to our servers. We currently use the fwknop client/daemon, but it's fairly heavyweight (and slow) for our purposes. I was wondering if anyone else is playing with this at all?

At the least, I'd like to be able to create and send the SPA packet without using the fwknop library/client. It seems like it should be straightforward, except that I haven't found a decent description of what the packet needs to look like. It also seems like implementing the server side should be doable, perhaps using Michael Santos' pcap modules.

Any pointers or guidance would be appreciated.

Thanks

-- 
Garry Hodgson
Lead Member of Technical Staff
AT&T Chief Security Office (CSO)

From garry@REDACTED Tue Dec 1 19:53:58 2015
From: garry@REDACTED (Garry Hodgson)
Date: Tue, 01 Dec 2015 13:53:58 -0500
Subject: [erlang-questions] avoiding duplicate headers in cowboy
In-Reply-To: <565DEA57.9030601@ninenines.eu>
References: <565DE936.9070400@research.att.com> <565DEA57.9030601@ninenines.eu>
Message-ID: <565DECC6.8000607@research.att.com>

That was it. Thanks!

On 12/01/2015 01:43 PM, Loïc Hoguin wrote:
> I'll make a guess here: you are passing header names as list to
> Cowboy? The correct type is binary. Dialyzer would probably tell you
> that.
>
> On 12/01/2015 07:38 PM, Garry Hodgson wrote:
>> I've got an application where I'm using cowboy to build what is,
>> in effect, a web proxy. It accepts http/s requests, makes some
>> decisions about whether and where to forward, then makes its
>> own request of the final endpoint and returns results. it was
>> originally written in webmachine and then ported to cowboy.
>>
>> everything works fine, but for one small nit. it appears that
>> cowboy automagically adds its own headers to the returned
>> results, which causes duplicate headers for server, content-type,
>> and content-length, as seen below.
>>
>> is there some way to avoid this?
>>
>> $ curl -X GET http://localhost:8080/v0/log/logs?maxrecords=1 -k -H
>> "$Creds" -v
>> * About to connect() to localhost port 8080 (#0)
>> *   Trying 127.0.0.1... connected
>> * Connected to localhost (127.0.0.1) port 8080 (#0)
>> > GET /v0/log/logs?maxrecords=1 HTTP/1.1
>> > User-Agent: curl/7.19.7 (x86_64-redhat-linux-gnu) libcurl/7.19.7
>> NSS/3.19.1 Basic ECC zlib/1.2.3 libidn/1.18 libssh2/1.4.2
>> > Host: localhost:8080
>> > Accept: */*
>> > Authorization: Bearer bc18fb0f-4b1f-493e-823c-fbefa0383e91
>> >
>> < HTTP/1.1 200 OK
>> < connection: keep-alive
>> < server: Cowboy
>> < date: Tue, 01 Dec 2015 17:34:29 GMT
>> < content-length: 71
>> < content-type: application/json
>> < connection: Keep-Alive
>> < date: Tue, 01 Dec 2015 17:34:30 GMT
>> < server: AT&T SECWEB 2.1.0
>> < content-length: 71
>> < content-type: application/json
>> < x-powered-by: PHP/5.4.28
>> < keep-alive: timeout=5, max=100
>> <
>> * Connection #0 to host localhost left intact
>> * Closing connection #0
>> {"error":"Not found","error_description":"Could not read token in CTS"}
>>

-- 
Garry Hodgson
Lead Member of Technical Staff
AT&T Chief Security Office (CSO)

From ok@REDACTED Wed Dec 2 05:06:15 2015
From: ok@REDACTED (Richard A. O'Keefe)
Date: Wed, 2 Dec 2015 17:06:15 +1300
Subject: [erlang-questions] Updates, lenses, and why cross-module inlining would be nice
In-Reply-To: <565CA6AC.9020306@gmail.com>
References: <1448544949.1214768.450737385.156BF7D2@webmail.messagingengine.com> <5657EA88.1010901@gmail.com> <1462F19A-19B6-4A16-A26D-4643BC0830CB@cs.otago.ac.nz> <565CA6AC.9020306@gmail.com>
Message-ID: <32BC6909-E775-48B0-98EB-D89716098F61@cs.otago.ac.nz>

On 1/12/2015, at 8:42 am, Michael Truog wrote:
> The header approach is preferable to make the dependency problems
> a compile-time concern, rather than creating a runtime failure when
> attempting a module update. It is important to not crash running source
> code due to errors.

It *would* be "preferable to make the dependency problems a compile-time concern". But using headers DOES NOT DO THAT.

Nor does the import_static approach "crash running source code due to errors." import_static has two purposes:
(1) to permit cross-module inlining and type inference safely;
(2) to detect a clashing update *before* changing the state of the system in order to *avoid* crashing running source code.

It is precisely the header approach which *creates* the potential for runtime problems due to version skew issues NOT BEING NOTICED.

For what it's worth, I have a version of lens.erl as lens.hrl. It's not a pretty sight. There's heavy use of a ?LET macro to try to avoid variable capture, multiple evaluation, and so on. It relies a lot on the compiler inlining

    (fun (X, ...) Body end)(E, ...)

into

    (X' = E, ..., Body, kill X'...)

which, sadly, it does not appear to do.
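For illustration, a ?LET of the kind being described might look like this (my sketch of the idea, not the actual lens.hrl code):

    %% Evaluate Expr exactly once, bind it to Var, then evaluate Body.
    %% The immediately-applied fun is precisely the shape the compiler is
    %% being hoped -- in vain, per the above -- to inline into a plain binding.
    -define(LET(Var, Expr, Body), ((fun(Var) -> (Body) end)(Expr))).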
> The point of this, is to show the same source code can be used, with
> inlining, and that the potential for breakage can be handled with testing
> all the functions. A lens interface should not change a whole lot, so it
> can be a dependable interface that is trusted.

Nobody is suggesting that import_static should be COMMON or used with frequently changed modules, only that something a bit more disciplined and rather safer than a horrible mass of -defines would be nice.

>>
>> RIGHT NOW you can get mysterious errors that go away
>> when you recompile things, due to the use of headers.
>> import_static doesn't increase the problems, or the
>> amount of work you have to do to fix them, or the nature
>> of that work. All it does is give the system a chance
>> to *notice* that the problem already exists.
> Yes. I believe this doesn't become a concern when tests are provided.

If the modules in question are tested *separately*, the tests don't help. If you test the modules *together*, you might as well *reload* them together.

From ok@REDACTED Wed Dec 2 05:36:27 2015
From: ok@REDACTED (Richard A. O'Keefe)
Date: Wed, 2 Dec 2015 17:36:27 +1300
Subject: [erlang-questions] Updates, lenses, and why cross-module inlining would be nice
In-Reply-To: <32BC6909-E775-48B0-98EB-D89716098F61@cs.otago.ac.nz>
References: <1448544949.1214768.450737385.156BF7D2@webmail.messagingengine.com> <5657EA88.1010901@gmail.com> <1462F19A-19B6-4A16-A26D-4643BC0830CB@cs.otago.ac.nz> <565CA6AC.9020306@gmail.com> <32BC6909-E775-48B0-98EB-D89716098F61@cs.otago.ac.nz>
Message-ID: 

I suppose I should explain some of the things that trouble me about using headers (.hrl files with lots of -defines) for, well, just about anything.

(1) Late syntax checking.

    -define(foo(F,G), fun (X) -> F(G(X)) end).

passes all the checks that could be done at -define time. But ?foo(1,2) isn't sensible. This is much less of a problem in Lisp.
(2) Even when a -define *could* in principle be given a type that could be checked, there is no way to actually do this. OK, for much of its life Erlang didn't have any types anyway.

(3) INADVERTENT variable capture. This is the one that caused me a lot of grief in Lisp, but at least in Lisp I could do

    (let ((X (gensym)) (Y (gensym)))
      `(....macro body using ,X and ,Y as variables))

This is a problem that Scheme "hygienic macros" solved. It's a problem that could be addressed with a slight change to -define. Let

    -define([Variables], Pattern, Replacement).

mean: do what -define(Pattern, Replacement) would do, EXCEPT that each identifier in Variables is replaced by a new variable throughout the replacement. This was, for example, a real problem for me while writing lens.hrl, and I am far from sure that I've got it right.

(4) INADVERTENT variable export. I have no particular wish to change the way variable scope works in Erlang. But when you are writing macros, and you want to ensure that an expression is not evaluated twice, and you write

    case Expr of Var -> Body end

then it _is_ a pain in the proctologist's area of expertise to be told about Var being exported, only matched by the inadvertent variable capture issue when you're *not* warned...

None of these problems applies to function inlining.

From charles@REDACTED Wed Dec 2 00:31:52 2015
From: charles@REDACTED (Charles Weitzer)
Date: Tue, 1 Dec 2015 23:31:52 +0000
Subject: [erlang-questions] Senior Software Engineer Needed for Machine Learning Group
Message-ID: 

The Voleon Group is a quantitative hedge fund located in Berkeley, California. We would like to hire a senior software engineer as soon as possible.

Voleon's founders previously worked together at one of the most successful quantitative hedge funds in the world. Our CEO has a PhD in Computer Science from Stanford and has been CEO and founder of a successful Internet infrastructure startup.
Our Chief Investment Officer has a PhD in Statistics from Berkeley. Voleon's team includes PhDs from leading departments in statistics, computer science, and mathematics. We have made several unpublished advances in the field of machine learning and in other areas as well.

Here is our formal job description:

**********************************************************
* Senior Software Engineer *

Technology-driven investment firm employing cutting-edge statistical machine learning techniques seeks an exceptionally capable software engineer. You will architect and implement new production trading systems, machine learning infrastructure, data integration pipelines, and large-scale storage systems. The firm researches and deploys systematic trading strategies designed to generate attractive returns without being dependent on the performance of the overall market.

Join a team of under 30 people that includes a Berkeley statistics professor as well as over ten PhDs from Berkeley, Chicago, CMU, Princeton, Stanford, and UCLA, led by the founder and CEO of a successful Internet infrastructure technology firm. The firm's offices are walking distance from BART and the UC Berkeley campus in downtown Berkeley, California. We have a casual and collegial office environment, weekly catered lunches, and competitive benefits packages.

We seek candidates with a proven track record of writing correct, well-designed software, solving hard problems, and delivering complex projects on time. You should preferably have experience designing and implementing fault-tolerant distributed systems. Experience with building large-scale data infrastructure, stream processing systems, or latency-sensitive programs is a bonus. We are growing rapidly. Willingness to take initiative and a gritty determination to productize are essential.

Required experience:
- experience with functional programming languages such as Erlang, Haskell, etc.
- developing with C/C++/Python/Go in a Linux environment with a focus on performance, concurrency, and correctness.
- working in TCP/IP networking, multi-threading, and server development.
- working with common Internet protocols (IP, TCP/UDP, SSL/TLS, HTTP, SNMP, etc.).
- architecting and designing highly available systems.
- architecting and designing large-scale data management infrastructure.
- working in large codebases and building modular, manageable code.

Preferred experience:
- debugging and performance profiling, including the use of tools such as strace, valgrind, gdb, tcpdump, etc.
- working with build and test automation tools.
- working with well-defined change management processes.
- diagnosing RDBMS performance problems, exploiting indexing, using EXPLAIN PLAN, optimizing at the code layer, etc.
- working with messaging queues (RabbitMQ, Redis, etc.) as well as distributed caching systems.

Interest in financial applications is essential, but experience in finance is not a primary factor in our hiring. Benefits and compensation are highly competitive.

**********************************************************

The above job description is just a starting point in terms of possible duties and seniority. We can be very flexible for the right person. If you are interested, please apply with your full and complete CV: http://voleon.com/apply/

The Voleon Group is an Equal Opportunity employer. Applicants are considered without regard to race, color, religion, creed, national origin, age, sex, gender, marital status, sexual orientation and identity, genetic information, veteran status, citizenship, or any other factors prohibited by local, state, or federal law.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From flagg.abe@REDACTED Wed Dec 2 00:36:14 2015
From: flagg.abe@REDACTED (Alberto Rosso)
Date: Wed, 2 Dec 2015 00:36:14 +0100
Subject: [erlang-questions] exec-port crashing
Message-ID: 

Hello guys,

Our Erlang application is running several C nodes and shell scripts, handled by erlexec. For some reason that I'm going to debug deeper, the exec-port port crashes when decoding a string received from the Erlang side.

This issue brought a couple of questions to light. Isn't it bad to use exec-port as the unique gateway for all of my C nodes? I mean, a single crash of exec-port frustrates the advantages of the supervision tree we've designed to make the application robust to hardware faults. Another doubt is about the possibility of exec-port becoming a performance bottleneck of the whole application, as the number of C nodes (and their throughput from/to the Erlang app) is growing.

Would starting more than one instance of the exec application be an (awful) solution?

Your thoughts are really appreciated.

-- 
Alberto Rosso
save a tree: please don't print this email unless strictly necessary

Experience and philosophy that do not generate indulgence and charity are two acquisitions that are not worth what they cost. (Alexandre Dumas fils)
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From mjtruog@REDACTED Wed Dec 2 07:19:43 2015
From: mjtruog@REDACTED (Michael Truog)
Date: Tue, 01 Dec 2015 22:19:43 -0800
Subject: [erlang-questions] Updates, lenses, and why cross-module inlining would be nice
In-Reply-To: <32BC6909-E775-48B0-98EB-D89716098F61@cs.otago.ac.nz>
References: <1448544949.1214768.450737385.156BF7D2@webmail.messagingengine.com> <5657EA88.1010901@gmail.com> <1462F19A-19B6-4A16-A26D-4643BC0830CB@cs.otago.ac.nz> <565CA6AC.9020306@gmail.com> <32BC6909-E775-48B0-98EB-D89716098F61@cs.otago.ac.nz>
Message-ID: <565E8D7F.5030106@gmail.com>

On 12/01/2015 08:06 PM, Richard A.
O'Keefe wrote:
> On 1/12/2015, at 8:42 am, Michael Truog wrote:
>> The header approach is preferable to make the dependency problems
>> a compile-time concern, rather than creating a runtime failure when
>> attempting a module update. It is important to not crash running source
>> code due to errors.
> It *would* be "preferable to make the dependency problems
> a compile-time concern". But using headers DOES NOT DO THAT.

The lens.hrl created from your lens.erl file does, because it contains only functions, which get dropped into any module that includes the header file, making them look like local functions within that module. The nowarn_unused_function compile directive was used to make sure unused functions don't generate warnings, so it acts like the opposite of an export statement. I know it may be a little dirty, but the Erlang compiler allows it, and it makes inlining work. You could consider it templating in Erlang in the absence of explicit support for templates. I have found this approach useful elsewhere.

> Nor does the import_static approach "crash running source code
> due to errors." import_static has two purposes:
> (1) to permit cross-module inlining and type inference safely;
> (2) to detect a clashing update *before* changing the state of
> the system in order to *avoid* crashing running source code.

The two (high-level) approaches I see for implementing import_static are where either:

1) modules are made permanent to make sure the modules do not change, so functions may be inlined between modules safely
2) modules are updated in groups, and the groups are created by the inlining between modules

If #1 is the approach, then it would be important to have an export_static statement to explicitly allow module functions to be inlined, to avoid other developers locking your module from future changes through their import_static usage.
Using the inline compile directive for this purpose would be overloading its meaning, since it can be necessary to inline an exported function for local use within a module.

If #2 is the approach, then the atomic update unit is probably best defined as an Erlang application. Then inlining between modules works, but only between modules within the same Erlang application. That group of modules has a version associated with it and can be tracked. This is a low-level module update, which would likely need to be coordinated the way Erlang application updates are coordinated now, just making the process more complex than it currently is (e.g., http://www.erlang.org/doc/man/appup.html). The export_static statement may be useful here, but it would depend on the details.

>
> It is precisely the header approach which *creates* the potential
> for runtime problems due to version skew issues NOT BEING NOTICED.

If you are relying on local functions changing instead of macros, the beam code is changing in a very noticeable way, the same as functions being modified.

>
> For what it's worth, I have a version of lens.erl as lens.hrl.
> It's not a pretty sight.
> There's heavy use of a ?LET macro to try to avoid variable
> capture, multiple evaluation, and so on.
> It relies a lot on the compiler inlining
>     (fun (X, ...) Body end)(E, ...)
> into
>     (X' = E, ..., Body, kill X'...)
> which, sadly, it does not appear to do.
>
>> The point of this, is to show the same source code can be used, with
>> inlining, and that the potential for breakage can be handled with testing
>> all the functions. A lens interface should not change a whole lot, so it
>> can be a dependable interface that is trusted.
> Nobody is suggesting that import_static should be COMMON or
> used with frequently changed modules, only that something a
> bit more disciplined and rather safer than a horrible mass of
> -defines woukd be nice.

I understand. My main concern is avoiding module inlining forcing modules to be permanent (unable to be updated during runtime), since that problem could easily grow in an Erlang system to negate the hot-code loading benefit Erlang normally provides, making its use harder to justify. You can find an actor model implementation in many programming languages, but Erlang's fault-tolerance features are what set it apart, so it is important to preserve those. (I understand people like to argue about whether an actor model is the same as what was defined in the 70s and that Erlang does not fit the definition due to its differences. I only mention it because the actor model is similar to Erlang processes.)

>
>>> RIGHT NOW you can get mysterious errors that go away
>>> when you recompile things, due to the use of headers.
>>> import_static doesn't increase the problems, or the
>>> amount of work you have to do to fix them, or the nature
>>> of that work. All it does is give the system a chance
>>> to *notice* that the problem already exists.
>> Yes. I believe this doesn't become a concern when tests are provided.
> If the modules in question are tested *separately*,
> the tests don't help. If you test the modules *together*,
> you might as well *reload* them together.

That seems to favor approach #2 mentioned above.

From khitai.pang@REDACTED Wed Dec 2 09:51:53 2015
From: khitai.pang@REDACTED (Khitai Pang)
Date: Wed, 2 Dec 2015 16:51:53 +0800
Subject: [erlang-questions] Implementing an erlang server to host organizational data
In-Reply-To: 
References: 
Message-ID: 

On 2015/12/1 22:50, Kevin Montuori wrote:
>>>>>> "kp" == Khitai Pang writes:
> kp> I am writing an erlang server to host organizational data, e.g. the
> kp> hierarchy of a company.
>
> kp> How to store the data on disk? Mnesia? How to organize the
> kp> data?
>
> When faced with these requirements, I'd use an LDAP directory for the
> on-disk storage.
> There are any number of roll-you-own solutions but for
> a commercial (read "real world") application it's hard to beat LDAP.
>
> The eldap application will get you going. Should you require a polyglot
> solution you'll find there are few languages that can't handle LDAP.
>
> k.

It seems eldap is an LDAP client API. Is there any LDAP server written in Erlang?

Thanks
Khitai

From ameretat.reith@REDACTED Wed Dec 2 09:43:38 2015
From: ameretat.reith@REDACTED (Ameretat Reith)
Date: Wed, 2 Dec 2015 12:13:38 +0330
Subject: [erlang-questions] Erlang and Docker
In-Reply-To: 
References: 
Message-ID: <20151202121338.6ae7b4d7@gmail.com>

> Some day in the far future, we may reach a point where Erlang is the
> only language we're running (or at least the only one on a particular
> server) and then it might be attractive to un-containerize it. On the
> other hand, maybe not. Adding another layer of virtualization in the
> form of Docker has cost us very little so far.

It happened to us. We have provided - and are still providing - network solutions written in C and executed in containers. At first we used Docker, but after putting much energy into struggling with its bugs in our high-load environment, we switched to the far more stable LXC.

I can summarize our reasons for using containers as two:

1. We didn't want our users - about 3K for each service - interrupted by an update. So we would start another container, drain load from the previous one, and remove it when its users were gone.

2. Our software would not natively scale enough to use all cores in an efficient manner; it wouldn't scale linearly with more powerful hardware.

We built the same software with Erlang, and as we did, we dropped LXC too. We write - sometimes time-consuming - relups and update while the service is running. We don't need to wait a day for all users of an LXC image to go away, and that makes very good sense. And as for scalability, Erlang-based software scales better than linearly!
Its CPU usage per task drops after feeding more tasks to it, and it's not slower than our legacy solutions.

Ease of deployment has not changed very much after moving to Erlang: we used DevOps tools - Canonical's Juju - to start container images, which needed a well-written build script on the CI server; now we just execute those shell scripts in the host environment.

From roberto@REDACTED Wed Dec 2 17:31:10 2015 From: roberto@REDACTED (Roberto Ostinelli) Date: Wed, 2 Dec 2015 17:31:10 +0100 Subject: [erlang-questions] [ANN] PGPool v0.7.0 Message-ID:

All, I've just open sourced PGPool. PGPool is a PostgreSQL client that automatically uses connection pools and handles reconnections in case of errors. Under the hood, it uses epgsql as the PostgreSQL driver, and poolboy as the pooling library. It can be found here: https://github.com/ostinelli/pgpool

There are other PostgreSQL pool solutions out there, however the ones I've found either do not handle reconnects, or have outdated dependencies, etc. Hopefully this can help others.

Cheers, r.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From serge@REDACTED Wed Dec 2 18:30:19 2015 From: serge@REDACTED (Serge Aleynikov) Date: Wed, 2 Dec 2015 12:30:19 -0500 Subject: [erlang-questions] exec-port crashing In-Reply-To: References: Message-ID:

Erlexec has been running in several production systems. Yet, I can't claim that it's bug free as the application has many non-trivial features and handles various edge cases. So if you find an issue, patches are welcome.

It may become a bottleneck if you heavily rely on redirection of standard I/O streams of spawned processes, in which case erlexec does the I/O marshaling between file descriptors. If that's the case, I suggest redesigning your application so that it conveys I/O to Erlang by some other means, freeing erlexec from the role of being the "broker" and leaving it to do its main function - process supervision.
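A rough sketch of this supervision-only use of erlexec (the c-node path and the restart handling are invented for illustration; see the erlexec documentation for the exact run options and monitor messages):

```erlang
%% Sketch only: supervise an external OS process with erlexec, without
%% routing its stdout/stderr through exec-port. The binary path is a
%% made-up example.
-module(cnode_watcher).
-export([start/0]).

start() ->
    {ok, _Apps} = application:ensure_all_started(erlexec),
    %% 'monitor' delivers a 'DOWN' message when the OS process dies,
    %% so the owning Erlang process can decide to restart it.
    {ok, Pid, OsPid} = exec:run("/usr/local/bin/my_cnode", [monitor]),
    wait_for_exit(Pid, OsPid).

wait_for_exit(Pid, OsPid) ->
    receive
        {'DOWN', OsPid, process, Pid, Reason} ->
            error_logger:warning_msg("c-node ~p exited: ~p~n",
                                     [OsPid, Reason]),
            %% a real system would apply MaxR/MaxT-style restart
            %% limits here instead of restarting unconditionally
            start()
    end.
```

The c-node then talks to Erlang over its own socket, as it would as a cnode, so exec-port never sits on the data path.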
On Tue, Dec 1, 2015 at 6:36 PM, Alberto Rosso wrote:
> Hello guys,
> our erlang application is running several c nodes and shell scripts,
> handled by erlexec. For some reason i'm going to debug deeper, the port
> exec-port crashes when decoding a string received from the erlang side.
>
> This issue brought a couple of questions to light.
> Isn't it bad to use exec-port as the unique gateway for all of my c nodes?
> I mean, a single crash of the exec-port frustrates the advantages of the
> supervision tree we've designed to make the application robust to hardware
> faults.
> Another doubt is about the possibility for exec-port to become a
> performance bottleneck of the whole application, as the number of c nodes
> (and their throughput from/to the erlang app) is growing.
>
> Could be starting more than one instance of the exec application, an
> (awful) solution?
>
> Your thoughts are really appreciated.
> --
> Alberto Rosso
> save a tree: please don't print this email unless strictly necessary
>
> Experience and philosophy that do not generate indulgence and charity are
> two acquisitions that are not worth what they cost. (Alexandre Dumas fils)
>
> _______________________________________________
> erlang-questions mailing list
> erlang-questions@REDACTED
> http://erlang.org/mailman/listinfo/erlang-questions

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From a.shneyderman@REDACTED Wed Dec 2 19:23:24 2015 From: a.shneyderman@REDACTED (Alex Shneyderman) Date: Wed, 2 Dec 2015 13:23:24 -0500 Subject: [erlang-questions] Implementing an erlang server to host organizational data In-Reply-To: References: Message-ID:

OpenLDAP is one of the simplest LDAP servers to use: http://linux.die.net/man/8/slapd

I would also like to point you in the direction of RethinkDB, which might help you with your requirement to propagate changes to the listening clients.

Cheers, Alex.
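For anyone following the LDAP suggestion, a minimal eldap client session looks roughly like this (host, bind DN, password and base DN are invented examples; consult the OTP eldap documentation for the exact options):

```erlang
%% Sketch only: read one branch of an organizational tree via OTP's
%% eldap client. Server address, credentials and DNs are made up.
-module(org_ldap).
-export([list_branch/0]).

list_branch() ->
    {ok, Handle} = eldap:open(["ldap.example.com"], [{port, 389}]),
    ok = eldap:simple_bind(Handle, "cn=admin,dc=example,dc=com", "secret"),
    %% All employee entries under the Sales branch of the company tree.
    Result = eldap:search(Handle,
                 [{base, "ou=Sales,dc=example,dc=com"},
                  {scope, eldap:wholeSubtree()},
                  {filter, eldap:equalityMatch("objectClass", "inetOrgPerson")},
                  {attributes, ["cn", "mail"]}]),
    eldap:close(Handle),
    Result.
```

The tree shape of the data maps naturally onto the DN hierarchy, which is one reason LDAP fits the organizational-chart use case.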
On Wed, Dec 2, 2015 at 3:51 AM, Khitai Pang wrote:
> It seems eldap is an LDAP client API. Is there any LDAP server written in
> Erlang?
>
> Thanks
> Khitai
>
> _______________________________________________
> erlang-questions mailing list
> erlang-questions@REDACTED
> http://erlang.org/mailman/listinfo/erlang-questions

From kunthar@REDACTED Wed Dec 2 20:07:28 2015 From: kunthar@REDACTED (Gokhan Boranalp) Date: Wed, 2 Dec 2015 21:07:28 +0200 Subject: [erlang-questions] Erlang and Docker In-Reply-To: <20151202121338.6ae7b4d7@gmail.com> References: <20151202121338.6ae7b4d7@gmail.com> Message-ID:

Hi all, We had several problems, mostly because Docker-based containerized solutions do not fit resource-intensive Erlang apps perfectly. lxc and nspawn[1] seem much more reliable to use. We use CoreOS, Fleet and Consul for Python apps, but *not* for Erlang. We want something isolated, with as much control over CPU and RAM as a VM gives, but also quick deployment, an OS with reduced abilities, etc., to manage the SDLC.

Therefore, I would like to draw your attention to Unikernels[2]. I believe the future will shine with these minimal kernels, a small set of needed drivers, and a hypervisor shaped only for your application.
As a good starting point, you can find a recipe to bake Erlang on Rumprun if you like to.

[1] http://www.freedesktop.org/software/systemd/man/systemd-nspawn.html
[2] https://en.wikipedia.org/wiki/Unikernel
[3] https://github.com/rumpkernel/rumprun-packages/tree/master/erlang

On Wed, Dec 2, 2015 at 10:43 AM, Ameretat Reith wrote:
>> Some day in the far future, we may reach a point where Erlang is the
>> only language we're running (or at least the only one on a particular
>> server) and then it might be attractive to un-containerize it. On the
>> other hand, maybe not. Adding another layer of virtualization in the
>> form of Docker has cost us very little so far.
>
> It happened to us. We have provided -and still doing- network
> solutions written in C and executed in containers; at first we used
> Docker but after putting much energy in struggling over It's bugs for
> our high load environment, switched to very very more stable lxc.
> _______________________________________________
> erlang-questions mailing list
> erlang-questions@REDACTED
> http://erlang.org/mailman/listinfo/erlang-questions

-- BR, \|/ Kunthar

From lloyd@REDACTED Wed Dec 2 20:22:07 2015 From: lloyd@REDACTED (lloyd@REDACTED) Date: Wed, 2 Dec 2015 14:22:07 -0500 (EST) Subject: [erlang-questions] Erlang and Docker In-Reply-To: References: <20151202121338.6ae7b4d7@gmail.com> Message-ID: <1449084127.61852571@apps.rackspace.com>

Hello, With far too little time to play, I've been interested for some time in the fast-developing world of low-cost, low-power ARM boards: e.g.:

-- http://www.hardkernel.com/main/products/prdt_info.php
-- https://www.olimex.com/Products/OLinuXino/A20/A20-OLinuXIno-LIME2-4GB/open-source-hardware
-- etc.

Among other things, these boards could provide outstanding platforms for teaching and learning Erlang and distributed computing generally--- not to mention embedded Erlang. Unikernels seem like they would be ideally suited to these resource constrained boards. Is anyone aware of efforts to integrate the two technologies?

Best wishes, LRP

-----Original Message-----
From: "Gokhan Boranalp"
Sent: Wednesday, December 2, 2015 2:07pm
To: "Erlang"
Subject: Re: [erlang-questions] Erlang and Docker

Hi all, We had several problems mostly because docker based containerized solutions not perfectly fit to resource intensive Erlang apps. lxc and nspawn[1] seems much more reliable to use. We use Coreos, Fleet and Consul for Python apps but *not* for Erlang though. We should have something isolated and having much control on CPU and RAM as VM does but we should have quick deployment, OS with reduced abilities etc. to manage SDLC.
_______________________________________________ erlang-questions mailing list erlang-questions@REDACTED http://erlang.org/mailman/listinfo/erlang-questions

From nem@REDACTED Wed Dec 2 22:05:44 2015 From: nem@REDACTED (Geoff Cant) Date: Wed, 2 Dec 2015 13:05:44 -0800 Subject: [erlang-questions] avoiding duplicate headers in cowboy In-Reply-To: <565DE936.9070400@research.att.com> References: <565DE936.9070400@research.att.com> Message-ID: <01E80AB6-35E9-4B64-AC80-D83431127DE8@erlang.geek.nz>

Hi there, I'm sure you're probably well along with your own code, but you may be interested in similar work. Heroku recently released Vegur - https://github.com/heroku/vegur which is the HTTP/1.1 proxy library built on top of cowboy/ranch and used at scale there. Fred and I both haunt this list if you have questions about it. (It may be overkill, but if you need a lot of spec compliance or are doing large scale proxying it is designed for those things)

Cheers,
-Geoff

> On 1 Dec, 2015, at 10:38, Garry Hodgson wrote:
>
> I've got an application where I'm using cowboy to build what is,
> in effect, a web proxy. It accepts http/s requests, makes some
> decisions about whether and where to forward, then makes its
> own request of the final endpoint and returns results. it was
> originally written in webmachine and then ported to cowboy.
>
> everything works fine, but for one small nit.
it appears that > cowboy automagically adds its own headers to the returned > results, which causes duplicate headers for server, content-type, > and content-length, as seen below. > > is there some way to avoid this? > > $ curl -X GET http://localhost:8080/v0/log/logs?maxrecords=1 -k -H "$Creds" -v > * About to connect() to localhost port 8080 (#0) > * Trying 127.0.0.1... connected > * Connected to localhost (127.0.0.1) port 8080 (#0) > > GET /v0/log/logs?maxrecords=1 HTTP/1.1 > > User-Agent: curl/7.19.7 (x86_64-redhat-linux-gnu) libcurl/7.19.7 NSS/3.19.1 Basic ECC zlib/1.2.3 libidn/1.18 libssh2/1.4.2 > > Host: localhost:8080 > > Accept: */* > > Authorization: Bearer bc18fb0f-4b1f-493e-823c-fbefa0383e91 > > > < HTTP/1.1 200 OK > < connection: keep-alive > < server: Cowboy > < date: Tue, 01 Dec 2015 17:34:29 GMT > < content-length: 71 > < content-type: application/json > < connection: Keep-Alive > < date: Tue, 01 Dec 2015 17:34:30 GMT > < server: AT&T SECWEB 2.1.0 > < content-length: 71 > < content-type: application/json > < x-powered-by: PHP/5.4.28 > < keep-alive: timeout=5, max=100 > < > * Connection #0 to host localhost left intact > * Closing connection #0 > {"error":"Not found","error_description":"Could not read token in CTS"} > > -- > Garry Hodgson > Lead Member of Technical Staff > AT&T Chief Security Office (CSO) > > "This e-mail and any files transmitted with it are AT&T property, are confidential, and are intended solely for the use of the individual or entity to whom this e-mail is addressed. If you are not one of the named recipient(s) or otherwise have reason to believe that you have received this message in error, please notify the sender and delete this message immediately from your computer. Any other use, retention, dissemination, forwarding, printing, or copying of this e-mail is strictly prohibited." 
> > _______________________________________________
> erlang-questions mailing list
> erlang-questions@REDACTED
> http://erlang.org/mailman/listinfo/erlang-questions

From bchesneau@REDACTED Wed Dec 2 22:24:58 2015 From: bchesneau@REDACTED (Benoit Chesneau) Date: Wed, 02 Dec 2015 21:24:58 +0000 Subject: [erlang-questions] avoiding duplicate headers in cowboy In-Reply-To: <01E80AB6-35E9-4B64-AC80-D83431127DE8@erlang.geek.nz> References: <565DE936.9070400@research.att.com> <01E80AB6-35E9-4B64-AC80-D83431127DE8@erlang.geek.nz> Message-ID:

or https://github.com/benoitc/cowboy_revproxy ...

On Wed, Dec 2, 2015 at 10:05 PM Geoff Cant wrote:
> Hi there, I'm sure you're probably well along with your own code, but you
> may be interested in similar work. Heroku recently released Vegur -
> https://github.com/heroku/vegur which is the HTTP/1.1 proxy library built
> on top of cowboy/ranch and used at scale there. Fred and I both haunt this
> list if you have questions about it. (It may be overkill, but if you need a
> lot of spec compliance or are doing large scale proxying it is designed for
> those things)
>
> Cheers,
> -Geoff
>
> _______________________________________________
> erlang-questions mailing list
> erlang-questions@REDACTED
> http://erlang.org/mailman/listinfo/erlang-questions
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From mjtruog@REDACTED Wed Dec 2 22:40:52 2015 From: mjtruog@REDACTED (Michael Truog) Date: Wed, 02 Dec 2015 13:40:52 -0800 Subject: [erlang-questions] exec-port crashing In-Reply-To: References: Message-ID: <565F6564.4070705@gmail.com> If you were using CloudI, each external service is supervised with a configuration setting for MaxR and MaxT in the same way it applies to a supervisor's one_for_one child process. The external service OS process is forked from an Erlang port called cloudi_os_spawn. To distribute the risk and latency of creating the external service processes, a cloudi_os_spawn OS process is created for each available OS CPU based on the configured (via the Erlang VM) scheduler count (a single cloudi_os_process is responsible for spawning all the configured number of processes as separate OS processes, each with a configured number of threads for parallel communication). The only pipe communication with the external service processes is for capturing the stderr/stdout streams unbuffered for central logging. The service communication uses a single socket per thread, due to a single instance of the CloudI API being created per thread. That approach avoids the contention you have with a single cnode socket for a single cnode. Using service request timeouts for communication and TCP, allows liveness checking of the external service without the net tick time that is required for cnodes. So, while this may be slightly off-topic, the information may be useful for the concerns you have mentioned. There is more information at http://cloudi.org/faq.html#4_Erlang On 12/02/2015 09:30 AM, Serge Aleynikov wrote: > Erlexec has been running in several production systems. Yet, I can't claim that it's bug free as the application has many non-trivial features and handles various edge cases. So if you find an issue, patches are welcome. 
> _______________________________________________
> erlang-questions mailing list
> erlang-questions@REDACTED
> http://erlang.org/mailman/listinfo/erlang-questions
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From ok@REDACTED Thu Dec 3 03:25:57 2015 From: ok@REDACTED (Richard A. O'Keefe) Date: Thu, 3 Dec 2015 15:25:57 +1300 Subject: [erlang-questions] Updates, lenses, and why cross-module inlining would be nice In-Reply-To: <565E8D7F.5030106@gmail.com> References: <1448544949.1214768.450737385.156BF7D2@webmail.messagingengine.com> <5657EA88.1010901@gmail.com> <1462F19A-19B6-4A16-A26D-4643BC0830CB@cs.otago.ac.nz> <565CA6AC.9020306@gmail.com> <32BC6909-E775-48B0-98EB-D89716098F61@cs.otago.ac.nz> <565E8D7F.5030106@gmail.com> Message-ID: <0AB5C64B-11D3-46A5-BB6A-AF275C5D7E08@cs.otago.ac.nz> On 2/12/2015, at 7:19 pm, Michael Truog wrote: > On 12/01/2015 08:06 PM, Richard A. O'Keefe wrote: >> On 1/12/2015, at 8:42 am, Michael Truog wrote: >>> The header approach is preferable to make the dependency problems >>> a compile-time concern, rather than creating a runtime failure when >>> attempting a module update. It is important to not crash running source >>> code due to errors. >> It *would* be "preferable to make the dependency problems >> a compile-time concern". But using headers DOES NOT DO THAT. > The lens.hrl created from your lens.erl file does due to containing only > functions, that get dropped into the module that includes the header file, > making them look like local functions within the module. No, that does NOT "make the dependency problems a compile-time concern". It really doesn't. Your lens.hrl defines functions, and mine defines macros. That doesn't make any difference. Either way, this is what CREATES untracked dependencies! T1. Module A includes header H. Module A is compiled. T2. Module B includes header H. Module B is compiled. (:-) Modules A and B are compatible. T3. Header H is changed. (:-( Modules A and B are still compatible but out of date. T4. Module B is recompiled. !$#@ Modules A and B are now incompatible AND THE SYSTEM DOES NOT KNOW THIS. 
!$#@ Modules A and B are now incompatible
AND THE SYSTEM DOES NOT KNOW THIS.
Now *if* you are diligent in using erlc's -M* options to create/update your Makefiles, you can ensure that both A and B will be automatically updated (when you get around to asking for updates). But there is no reason why erlc -M... couldn't track -import_static just as well as -include, and ensuring that it does would be an important part of implementing -import_static. > I have found this approach useful > elsewhere. The issue is not "is it useful". I don't see how anyone could argue against that. The issue is "is it BETTER than cross-moduling inlining" and "is it SAFER" than cross-module inlining and the answers are NO and NO. >> Nor does the import_static approach "crash running source code >> due to errors." import_static has two purposes: >> (1) to permit cross-module inlining and type inference safely; >> (2) to detect a clashing update *before* changing the state of >> the system in order to *avoid* crashing running source code. > The two (high-level) approaches I see for implementing import_static is > where either: > 1) modules are made permanent to make sure the modules do not change, > so functions may be inlined between modules safely > 2) modules are updated in groups and the groups are created by the > inlining between modules And in this respect there is no difference between cross-module inlining and headers. Except that (2) is a slightly weird way to put it. Updates should be based on the *dependency graph* just as updates triggered by changes to headers or to yecc and/or leex or to parse transforms should be based on the dependency graph. Put thus, (2) is what I had in mind. > If #1 is the approach, then it should be important to have an export_static > statement to explicitly allow module functions to be inlined, to avoid other > developers locking your module from future changes due to their > import_static usage. 
Using the inline compile directive for this purpose > would be overloading its meaning, since it can be necessary to inline an > exported function for local use within a module. If you want a function that is inlined within a module but not inlined when called outside, have two functions. > > If #2 is the approach, then the atomic update unit is probably best defined > as an Erlang application. There are two separate things here: *recompilation* and *reloading*. >> It is precisely the header approach which *creates* the potential >> for runtime problems due to version skew issues NOT BEING NOTICED. > if you are relying on local functions changing instead of macros, the > beam code is changing in a very noticeable way, the same as functions > being modified. Let's take a concrete example. Module A and B use lens.hrl. Initially, lens.hrl uses a two-function representation of lenses. Then lens.hrl is revised to use a three-function representation of lenses. Yes, the BEAM code for module B has changed. But so what? It could change for all sorts of reasons having NOTHING to say about interface compatibility between modules A and B. What module A needs to know is not that some other module has different BEAM code but that *it* should have different BEAM code because a header that *it* depends on has changed. And here we come to the heart of the problem, and it's long past time this was fixed. When you compile a .erl file, the .beam file includes the name of the source file, BUT IT DOES NOT INCLUDE THE NAMES OF ANY HEADER FILES THAT THE OBJECT CODE DEPENDS ON. If .beam files included header file names (and timestamps or checksums), then it would be possible to ask "is module A now out of date"? > I understand. 
My main concern is avoiding module inlining forcing > modules to be permanent (unable to be updated during runtime) > since that problem could easily grow in an Erlang system to negate > the hot-code loading benefit Erlang normally provides, making its > use harder to justify. You can find an actor model implementation > in many programming languages, but Erlang's fault-tolerance > features are what sets it apart, so it is important to preserve those. Completely agreed. It's also relevant that cross-module inlining is of little benefit unless the compiler inlines (fun (...) -> ... end)(...) and currently it doesn't appear to do so. >> If the modules in question are tested *separately*, >> the tests don't help. If you test the modules *together*, >> you might as well *reload* them together. > Then that seems to favor the approach #2 mentioned above. Yes. Oh, I should point out that -include files defining a bunch of functions *do* address many of my reasons for disliking preprocessor use. But not the untracked dependencies problem. From max.lapshin@REDACTED Thu Dec 3 07:20:31 2015 From: max.lapshin@REDACTED (Max Lapshin) Date: Thu, 3 Dec 2015 13:20:31 +0700 Subject: [erlang-questions] exec-port crashing In-Reply-To: <565F6564.4070705@gmail.com> References: <565F6564.4070705@gmail.com> Message-ID: Sergey, have you compared TCP localhost vs stdin/stdout pipe? Throughput and latency. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mjtruog@REDACTED Thu Dec 3 08:11:36 2015 From: mjtruog@REDACTED (Michael Truog) Date: Wed, 02 Dec 2015 23:11:36 -0800 Subject: [erlang-questions] Updates, lenses, and why cross-module inlining would be nice In-Reply-To: <0AB5C64B-11D3-46A5-BB6A-AF275C5D7E08@cs.otago.ac.nz> References: <1448544949.1214768.450737385.156BF7D2@webmail.messagingengine.com> <5657EA88.1010901@gmail.com> <1462F19A-19B6-4A16-A26D-4643BC0830CB@cs.otago.ac.nz> <565CA6AC.9020306@gmail.com> <32BC6909-E775-48B0-98EB-D89716098F61@cs.otago.ac.nz> <565E8D7F.5030106@gmail.com> <0AB5C64B-11D3-46A5-BB6A-AF275C5D7E08@cs.otago.ac.nz> Message-ID: <565FEB28.4090301@gmail.com> On 12/02/2015 06:25 PM, Richard A. O'Keefe wrote: > On 2/12/2015, at 7:19 pm, Michael Truog wrote: >> On 12/01/2015 08:06 PM, Richard A. O'Keefe wrote: >>> On 1/12/2015, at 8:42 am, Michael Truog wrote: >>>> The header approach is preferable to make the dependency problems >>>> a compile-time concern, rather than creating a runtime failure when >>>> attempting a module update. It is important to not crash running source >>>> code due to errors. >>> It *would* be "preferable to make the dependency problems >>> a compile-time concern". But using headers DOES NOT DO THAT. >> The lens.hrl created from your lens.erl file does due to containing only >> functions, that get dropped into the module that includes the header file, >> making them look like local functions within the module. > No, that does NOT "make the dependency problems a compile-time > concern". It really doesn't. Your lens.hrl defines functions, > and mine defines macros. That doesn't make any difference. > Either way, this is what CREATES untracked dependencies! > > T1. Module A includes header H. > Module A is compiled. > T2. Module B includes header H. > Module B is compiled. > (:-) Modules A and B are compatible. > T3. Header H is changed. > (:-( Modules A and B are still compatible but out of date. > T4. Module B is recompiled. 
> Oh, I should point out that -include files defining a bunch
> of functions *do* address many of my reasons for disliking
> preprocessor use. But not the untracked dependencies problem.
If we had header files that were only allowed to contain functions and
types -- call them "template files" for lack of a better name (ideally they
would have their own export statements, to help distinguish private
functions/types from the interface offered to including modules) -- AND we
had the include files (and template files) versioned within the beam output
(to address the untracked dependency problem), wouldn't that approach be
preferable to trying to manage the module dependency graph during runtime
code upgrades? Why would the "template files" approach not be sufficient?

From mjtruog@REDACTED Thu Dec 3 08:19:49 2015
From: mjtruog@REDACTED (Michael Truog)
Date: Wed, 02 Dec 2015 23:19:49 -0800
Subject: [erlang-questions] exec-port crashing
In-Reply-To: 
References: <565F6564.4070705@gmail.com>
Message-ID: <565FED15.3080104@gmail.com>

On 12/02/2015 10:20 PM, Max Lapshin wrote:
>
> Sergey, have you compared TCP localhost vs stdin/stdout pipe?
> Throughput and latency.
>
TCP on localhost should be better due to the atomic send being limited by
the MTU on the localhost interface, which is normally 16436 on Linux, when
compared to the atomic send on a pipe being limited by PIPE_BUF, which is
normally 4096. There is a definite improvement when using UNIX domain
sockets instead of INET domain sockets, so hopefully
https://github.com/erlang/otp/pull/612 will make its way in sometime.

From flagg.abe@REDACTED Thu Dec 3 09:42:24 2015
From: flagg.abe@REDACTED (Alberto Rosso)
Date: Thu, 3 Dec 2015 09:42:24 +0100
Subject: [erlang-questions] exec-port crashing
In-Reply-To: <565FED15.3080104@gmail.com>
References: <565F6564.4070705@gmail.com> <565FED15.3080104@gmail.com>
Message-ID: 

In our current architecture the use of standard IO is limited and the heavy
traffic goes through c-node sockets. I'm going to investigate the memory
corruption in detail. Having a comparison between TCP and stdin/err pipes
would be interesting indeed.
I'm also interested in finding a more fault tolerant way to manage external
processes. Michael, the solution you implemented is interesting, although
difficult to integrate within our system, which is not intended to supply
services.

Thank you all

On 3 December 2015 at 08:19, Michael Truog wrote:

> On 12/02/2015 10:20 PM, Max Lapshin wrote:
>
>>
>> Sergey, have you compared TCP localhost vs stdin/stdout pipe?
>> Throughput and latency.
>>
>> TCP on localhost should be better due to the atomic send
> being limited by the MTU on the localhost interface
> which is normally 16436 on Linux, when compared to the
> atomic send on a pipe being limited by PIPE_BUF which
> is normally 4096. There is a definite improvement
> when using UNIX domain sockets instead of INET domain
> sockets, so hopefully
> https://github.com/erlang/otp/pull/612 will make its
> way in sometime.
>

-- 
Alberto Rosso

save a tree: please don't print this email unless strictly necessary

Experience and philosophy that do not give rise to indulgence and charity
are two acquisitions not worth what they cost. (Alexandre Dumas fils)

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From Trond.Endrestol@REDACTED Thu Dec 3 12:32:51 2015
From: Trond.Endrestol@REDACTED (Trond Endrestøl)
Date: Thu, 3 Dec 2015 12:32:51 +0100 (CET)
Subject: [erlang-questions] Current function undefined as displayed by i().
Message-ID: 

Hi,

I'm new to the list and new to Erlang. I'm running:

FreeBSD 10.2-STABLE #0 r291578: Tue Dec 1 14:32:16 CET 2015 amd64

Erlang is erlang-18.1.5,3 from FreeBSD's ports collection (lang/erlang):

Erlang/OTP 18 [erts-7.1] [source] [64-bit] [smp:4:4] [async-threads:10]
[hipe] [kernel-poll:true] [dtrace]

Eshell V7.1

I came across a puzzle: I have an Erlang process spawned with a function of
arity 0. Running the shell command i() gives this (abbreviated) output.
Pid        Initial Call        Heap  Reds   Msgs  Registered  Current Function  Stack
<0.42.0>   mathserver:loop/0    987  10679  0     mathserver  undefined         0

Here's start/0 from my module mathserver:

start() ->
    Pid = erlang:spawn(?MODULE, loop, []),
    erlang:register(?MODULE, Pid),          % Local name register
    global:register_name(?MODULE, Pid).     % Global name register

loop/0 in this module consists mainly of a receive statement, spawning new
processes when it recognises a particular message.

While googling for "erlang current function undefined", I tried the sample
code from http://stackoverflow.com/questions/12585612/erlang-undefined-function.
I made some slight modifications to the code, limited to a better module
name, the use of the ?MODULE macro as appropriate, and a call to
global:register_name/2. In this case bcast_server:process_requests/1 is
spawned with an empty list in the argument list, i.e. [[]].

start() ->
    ServerPid = erlang:spawn(?MODULE, process_requests, [ [] ]),
    erlang:register(?MODULE, ServerPid),
    global:register_name(?MODULE, ServerPid).

Running i() produces this (abbreviated) output:

Pid        Initial Call                     Heap  Reds  Msgs  Registered    Current Function                 Stack
<0.57.0>   bcast_server:process_requests/1  233   1     0     bcast_server  bcast_server:process_requests/1  3

process_requests/1 is similar to my own loop/0. The only difference is the
arity, 0 versus 1, i.e. non-zero. A named Erlang node bears no difference
from an anonymous Erlang node.

Can someone please explain why "current function" is undefined in one
instance and defined in another?

-- 
----------------------------------------------------------------------
Trond Endrestøl                  |  Trond.Endrestol@REDACTED
ACM, NAS, NUUG, SAGE, USENIX     |  FreeBSD 10.2-S & Alpine 2.20

From roberto@REDACTED Thu Dec 3 15:13:58 2015
From: roberto@REDACTED (Roberto Ostinelli)
Date: Thu, 3 Dec 2015 15:13:58 +0100
Subject: [erlang-questions] [ANN] Syn 0.9.0
Message-ID: 

Dear all,
I've just released Syn 0.9.0.
We've enforced registration atomicity at node level, and disallowed registering a pid under multiple aliases. For those of you who don't know it, Syn is a global process registry for Erlang. https://github.com/ostinelli/syn Best, r. -------------- next part -------------- An HTML attachment was scrubbed... URL: From andra.dinu@REDACTED Thu Dec 3 15:18:54 2015 From: andra.dinu@REDACTED (Andra Dinu) Date: Thu, 3 Dec 2015 14:18:54 +0000 Subject: [erlang-questions] [ANN] 'Erlang in the Particle Accelerator- at Fermilab' - Webinar 16 Dec 17:00 GMT Message-ID: Join us on 16 Dec at 17:00 GMT (11:00 CST; 9:00 PST) to find out how Erlang is used at Fermilab, America's premier particle physics laboratory and particle accelerator. Dennis J. Nicklaus and Richard M Neswold will describe how Erlang is used in controls applications of particle physics research. They will provide an overview of some of the scientific experiments ongoing at the lab and the accelerator infrastructure which provides beam to those experiments. They will show where the Erlang-based components fit in the accelerator control system, including a framework for accelerator controls data acquisition and a data pool manager for control system client data requests. They will delve into the data pool manager more deeply to illustrate why Erlang was an excellent choice for this application, and show features of Erlang that expand the capabilities of the data pool manager. Register: http://bit.ly/1NnXOYk Best, Andra *Andra Dinu* Community & Social Largest Erlang & Elixir event in the US! Erlang Factory San Francisco Bay Area 10-11 March 2016 -------------- next part -------------- An HTML attachment was scrubbed... URL: From max.lapshin@REDACTED Thu Dec 3 15:30:55 2015 From: max.lapshin@REDACTED (Max Lapshin) Date: Thu, 3 Dec 2015 21:30:55 +0700 Subject: [erlang-questions] Explain please easter egg in lager: boston_lager Message-ID: lager_transform.erl has strange transform for boston_lager:debug(....) 
All "r" are replaced with "h" in format message. Is it an allusion to specific pronunciation? -------------- next part -------------- An HTML attachment was scrubbed... URL: From seancribbs@REDACTED Thu Dec 3 15:40:34 2015 From: seancribbs@REDACTED (Sean Cribbs) Date: Thu, 3 Dec 2015 08:40:34 -0600 Subject: [erlang-questions] Explain please easter egg in lager: boston_lager In-Reply-To: References: Message-ID: https://www.youtube.com/watch?v=RbK4cL3QSc0 On Thu, Dec 3, 2015 at 8:30 AM, Max Lapshin wrote: > > lager_transform.erl has strange transform for > > boston_lager:debug(....) > > All "r" are replaced with "h" in format message. > > Is it an allusion to specific pronunciation? > > > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From max.lapshin@REDACTED Thu Dec 3 15:41:46 2015 From: max.lapshin@REDACTED (Max Lapshin) Date: Thu, 3 Dec 2015 21:41:46 +0700 Subject: [erlang-questions] Explain please easter egg in lager: boston_lager In-Reply-To: References: Message-ID: Seems to be some thin difference. On Thu, Dec 3, 2015 at 9:40 PM, Sean Cribbs wrote: > https://www.youtube.com/watch?v=RbK4cL3QSc0 > > On Thu, Dec 3, 2015 at 8:30 AM, Max Lapshin wrote: > >> >> lager_transform.erl has strange transform for >> >> boston_lager:debug(....) >> >> All "r" are replaced with "h" in format message. >> >> Is it an allusion to specific pronunciation? >> >> >> _______________________________________________ >> erlang-questions mailing list >> erlang-questions@REDACTED >> http://erlang.org/mailman/listinfo/erlang-questions >> >> > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From zxq9@REDACTED Thu Dec 3 15:54:44 2015 From: zxq9@REDACTED (zxq9) Date: Thu, 03 Dec 2015 23:54:44 +0900 Subject: [erlang-questions] Explain please easter egg in lager: boston_lager In-Reply-To: References: Message-ID: <1952948.vsTV3468hE@changa> I park my car nearby so I don't have to walk too far. I paahk mah caah neahbai sow I don't have tuh waahk too faah. If you go to Boston the effects sort of pile up (the accent, food, things people talk about, typical Boston casual full-of-hate-but-not-really speaking style, etc.). When you're there you feel its a pretty heavy cultural difference. Sometimes crossing state lines in the US (or even city limits) can feel like more of a culture shock than crossing some national boundaries. On 2015?12?3? ??? 21:41:46 Max Lapshin wrote: > Seems to be some thin difference. > > On Thu, Dec 3, 2015 at 9:40 PM, Sean Cribbs wrote: > > > https://www.youtube.com/watch?v=RbK4cL3QSc0 > > > > On Thu, Dec 3, 2015 at 8:30 AM, Max Lapshin wrote: > > > >> > >> lager_transform.erl has strange transform for > >> > >> boston_lager:debug(....) > >> > >> All "r" are replaced with "h" in format message. > >> > >> Is it an allusion to specific pronunciation? > >> > >> > >> _______________________________________________ > >> erlang-questions mailing list > >> erlang-questions@REDACTED > >> http://erlang.org/mailman/listinfo/erlang-questions > >> > >> > > From michael.santos@REDACTED Thu Dec 3 16:12:31 2015 From: michael.santos@REDACTED (Michael Santos) Date: Thu, 3 Dec 2015 10:12:31 -0500 Subject: [erlang-questions] exec-port crashing In-Reply-To: References: <565F6564.4070705@gmail.com> Message-ID: <20151203151231.GA20782@brk> On Thu, Dec 03, 2015 at 01:20:31PM +0700, Max Lapshin wrote: > Sergey, have you compared TCP localhost vs stdin/stdout pipe? > Throughput and latency. 
Some benchmarks comparing pipes vs sockets: https://wiki.openlighting.org/index.php/Pipe_vs_Unix_Socket_Performance https://sites.google.com/site/rikkus/sysv-ipc-vs-unix-pipes-vs-unix-sockets From montuori@REDACTED Thu Dec 3 16:23:27 2015 From: montuori@REDACTED (Kevin Montuori) Date: Thu, 03 Dec 2015 09:23:27 -0600 Subject: [erlang-questions] Explain please easter egg in lager: boston_lager In-Reply-To: (Max Lapshin's message of "Thu, 3 Dec 2015 21:30:55 +0700") References: Message-ID: >>>>> "ml" == Max Lapshin writes: ml> boston_lager:debug(....) ml> Is it an allusion to specific pronunciation? As other's mentioned, yes. And also, Boston Lager is one of the Boston based Sam Adams Brewery's flagship beeahs. k. -- Kevin Montuori montuori@REDACTED From jesper.louis.andersen@REDACTED Thu Dec 3 16:34:32 2015 From: jesper.louis.andersen@REDACTED (Jesper Louis Andersen) Date: Thu, 3 Dec 2015 16:34:32 +0100 Subject: [erlang-questions] Explain please easter egg in lager: boston_lager In-Reply-To: References: Message-ID: Oh, was this very important feature removed in Lager 3.x ? On Thu, Dec 3, 2015 at 3:30 PM, Max Lapshin wrote: > > lager_transform.erl has strange transform for > > boston_lager:debug(....) > > All "r" are replaced with "h" in format message. > > Is it an allusion to specific pronunciation? > > > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions > > -- J. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From roberto@REDACTED Thu Dec 3 16:37:06 2015 From: roberto@REDACTED (Roberto Ostinelli) Date: Thu, 3 Dec 2015 16:37:06 +0100 Subject: [erlang-questions] epgsql queue empty errors Message-ID: All, I'm often seeing CRASH reports of epgsql queue empty: CRASH REPORT Process <0.4709.0> with 0 neighbours exited with reason: {empty,[{queue,get,[{[],[]}],[{file,"queue.erl"},{line,183}]},{epgsql_sock,command_tag,1,[{file,"src/epgsql_sock.erl"},{line,409}]},{epgsql_sock,on_message,2,[{file,"src/epgsql_sock.erl"},{line,684}]},{epgsql_sock,loop,1,[{file,"src/epgsql_sock.erl"},{line,334}]},{gen_server,try_dispatch,4,[{file,"gen_server.erl"},{line,615}]},{gen_server,handle_msg,5,[{file,"gen_server.erl"},{line,681}]},{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,240}]}]} in gen_server:terminate/7 line 826 These all happen in epgsql_sock here: https://github.com/epgsql/epgsql/blob/3.1.1/src/epgsql_sock.erl#L409 Has anyone seen this behavior and has more insights to share? Best, r. -------------- next part -------------- An HTML attachment was scrubbed... URL: From Trond.Endrestol@REDACTED Thu Dec 3 18:00:27 2015 From: Trond.Endrestol@REDACTED (=?UTF-8?Q?Trond_Endrest=C3=B8l?=) Date: Thu, 3 Dec 2015 18:00:27 +0100 (CET) Subject: [erlang-questions] Current function undefined as displayed by i(). In-Reply-To: References: Message-ID: On Thu, 3 Dec 2015 12:32+0100, Trond Endrest?l wrote: > Can someone please explain why "current function" is undefined in one > instance and defined in another? To answer myself and leave a trace of what I discovered. HiPE aka -compile([native])., was the cause of this confusion. 
-- ---------------------------------------------------------------------- Trond Endrest?l | Trond.Endrestol@REDACTED ACM, NAS, NUUG, SAGE, USENIX | FreeBSD 10.2-S & Alpine 2.20 From serge@REDACTED Fri Dec 4 05:44:27 2015 From: serge@REDACTED (Serge Aleynikov) Date: Thu, 3 Dec 2015 23:44:27 -0500 Subject: [erlang-questions] exec-port crashing In-Reply-To: <565FED15.3080104@gmail.com> References: <565F6564.4070705@gmail.com> <565FED15.3080104@gmail.com> Message-ID: Though this does depend on whether one is measuring throughput vs. latency. Generally, TCP would always be more latent than pipes on Linux since pipes marshaling doesn't need to go through the TCP stack, which adds penalty. As you correctly observed, UNIX domain sockets, don't involve that overhead, and have latency performance comparable to pipes, but throughput better than pipes. On Thu, Dec 3, 2015 at 2:19 AM, Michael Truog wrote: > On 12/02/2015 10:20 PM, Max Lapshin wrote: > >> >> Sergey, have you compared TCP localhost vs stdin/stdout pipe? >> Throughput and latency. >> >> TCP on localhost should be better due to the atomic send > being limited by the MTU on the localhost interface > which is normally 16436 on Linux, when compared to the > atomic send on a pipe being limited by PIPE_BUF which > is normally 4096. There is a definite improvement > when using UNIX domain sockets instead of INET domain > sockets, so hopefully > https://github.com/erlang/otp/pull/612 will make its > way in sometime. > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions > -------------- next part -------------- An HTML attachment was scrubbed... 
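Max's throughput-and-latency question can be explored directly from Erlang
by timing round-trips over a stdin/stdout pipe via open_port. This is a
minimal sketch, not a rigorous benchmark: the module name `port_rtt` is
made up here, it assumes a `cat` executable on the PATH to echo stdin back,
and a fair comparison would repeat the same loop over gen_tcp and UNIX
domain sockets with varying message sizes.

```erlang
-module(port_rtt).
-export([bench/1]).

%% Times N small round-trips through a stdin/stdout pipe to `cat`.
%% Illustrative only: buffering inside the external program and the
%% fixed tiny message size both skew absolute numbers.
bench(N) when N > 0 ->
    Cat = os:find_executable("cat"),    %% assumes cat is on the PATH
    Port = open_port({spawn_executable, Cat},
                     [binary, stream, use_stdio]),
    {Micros, ok} = timer:tc(fun() -> loop(Port, <<"ping\n">>, N) end),
    port_close(Port),
    io:format("~p round-trips in ~p us~n", [N, Micros]).

loop(_Port, _Msg, 0) ->
    ok;
loop(Port, Msg, N) ->
    true = port_command(Port, Msg),     %% write to the port's stdin
    receive
        {Port, {data, _Echoed}} ->      %% wait for the echo back
            loop(Port, Msg, N - 1)
    end.
```

Keeping the loop identical and swapping only the transport gives a
like-for-like latency figure for pipes versus sockets.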
URL: 

From bchesneau@REDACTED Fri Dec 4 06:04:23 2015
From: bchesneau@REDACTED (Benoit Chesneau)
Date: Fri, 04 Dec 2015 05:04:23 +0000
Subject: [erlang-questions] maps API
Message-ID: 

What is the reasoning behind throwing an error on `maps:get/2` instead of
returning `undefined`, and having `maps:find/2` instead? I am curious. I
would have expected that get, like proplists, would return a default set to
undefined.

- benoit
-------------- next part --------------
An HTML attachment was scrubbed...

From zxq9@REDACTED Fri Dec 4 09:22:47 2015
From: zxq9@REDACTED (zxq9)
Date: Fri, 04 Dec 2015 17:22:47 +0900
Subject: [erlang-questions] maps API
In-Reply-To: 
References: 
Message-ID: <2725126.MxGNFQvtuD@burrito>

On Friday 04 December 2015 05:04:23 Benoit Chesneau wrote:
> What is the reasoning of throwing an error on `maps:get/2` instead of
> returning `undefined`? And instead have `maps:find/2`. I am curious. I
> would have expected that get like a proplists would return a default set to
> undefined.

I've always wondered why these were not universalized across these
different modules:

    fetch(Key, Container) -> Value. % or throw exception

    find(Key, Container) -> {ok, Value} | error.

    get(Key, Container) -> get(Key, Container, undefined).
    get(Key, Container, Default) -> Value | Default.

find/2, as defined above, could of course be defined in terms of get/3, and
the specific names of the functions are a bit arbitrary -- but my point is
that a bit more consistency would be nice.

It is nice to have one that *always* throws an exception on a miss. It's
nice to have one that returns a reliable response on a miss. It's nice to
have one that always returns a custom default on a miss. It's natural to
crash or carry on without having to wrap calls in try ... catch or make sure
that a structure is defined in a way that a stored value will *never* be
'error' or 'undefined' or whatever. All three exist in various APIs, but
it's one of those weird cases of "A, B, C" <- pick any two.
Since it's Christmas, I would also ask Santa for a partition/2 that was
common to all containers-of-lists type thingies.

-Craig

From bchesneau@REDACTED Fri Dec 4 11:46:43 2015
From: bchesneau@REDACTED (Benoit Chesneau)
Date: Fri, 04 Dec 2015 10:46:43 +0000
Subject: [erlang-questions] maps API
In-Reply-To: <2725126.MxGNFQvtuD@burrito>
References: <2725126.MxGNFQvtuD@burrito>
Message-ID: 

On Fri, Dec 4, 2015 at 9:23 AM zxq9 wrote:

> On Friday 04 December 2015 05:04:23 Benoit Chesneau wrote:
> > What is the reasoning of throwing an error on `maps:get/2` instead of
> > returning `undefined`? And instead have `maps:find/2`. I am curious. I
> > would have expected that get like a proplists would return a default set
> to
> > undefined.
>
> I've always wondered why these were not universalized across these
> different modules:
>
>     fetch(Key, Container) -> Value. % or throw exception
>
>     find(Key, Container) -> {ok, Value} | error.
>
>     get(Key, Container) -> get(Key, Container, undefined).
>     get(Key, Container, Default) -> Value | Default.
>
> find/2, as defined above could of course be defined in terms of get/3, and
> the specific names of the functions is a bit arbitrary -- but my point is
> that a bit more consistency would be nice.
>
> It is nice to have one that *always* throws an exception on a miss. Its
> nice to have one that returns a reliable response on a miss. Its nice to
> have one that always returns a custom default on a miss. Its natural to
> crash or carry on without having to wrap calls in try..except or make sure
> that a structure is defined in a way that a stored value will *never* be
> 'error' or 'undefined' or whatever. All three exist in various APIs, but
> its one of those weird cases of "A, B, C" <- pick any two.
>

I would have expected the crash to come from matching on a returned
`{error, notfound}` or `{error, bad_map}`, rather than from the call itself
raising an exception.
IMO, it's a question of convenience there, since throwing an error remove the need to match a result (ie. `Value = maps:get(Maps, Key)` instead of `{ok, Value} = maps:get(Maps, Key)` . I guess I wanted to confirm it or maybe there is another reason for it? - benoit -------------- next part -------------- An HTML attachment was scrubbed... URL: From henrik.x.nord@REDACTED Fri Dec 4 13:16:36 2015 From: henrik.x.nord@REDACTED (Henrik Nord X) Date: Fri, 4 Dec 2015 13:16:36 +0100 Subject: [erlang-questions] Patch package OTP 17.5.6.5 released Message-ID: <56618424.6010507@ericsson.com> Patch Package: OTP 17.5.6.5 Git Tag: OTP-17.5.6.5 Date: 2015-12-04 Trouble Report Id: OTP-11482, OTP-13151 Seq num: seq12872 System: OTP Release: 17 Application: erts-6.4.1.4, kernel-3.2.0.1, ssl-6.0.1.1 Predecessor: OTP 17.5.6.4 Check out the git tag OTP-17.5.6.5, and build a full OTP system including documentation. Apply one or more applications from this build as patches to your installation using the 'otp_patch_apply' tool. For information on install requirements, see descriptions for each application version below. --------------------------------------------------------------------- --- erts-6.4.1.4 ---------------------------------------------------- --------------------------------------------------------------------- The erts-6.4.1.4 application can be applied independently of other applications on a full OTP 17 installation. --- Fixed Bugs and Malfunctions --- OTP-11482 Application(s): erts, kernel Related Id(s): seq12872 The 'raw' socket option could not be used multiple times in one call to any e.g gen_tcp function because only one of the occurrences were used. This bug has been fixed, and also a small bug concerning propagating error codes from within inet:setopts/2. 
Full runtime dependencies of erts-6.4.1.4: kernel-3.0, sasl-2.4, stdlib-2.0 --------------------------------------------------------------------- --- kernel-3.2.0.1 -------------------------------------------------- --------------------------------------------------------------------- Note! The kernel-3.2.0.1 application can *not* be applied independently of other applications on an arbitrary OTP 17 installation. On a full OTP 17 installation, also the following runtime dependency has to be satisfied: -- erts-6.1.2 (first satisfied in OTP 17.1.2) --- Fixed Bugs and Malfunctions --- OTP-11482 Application(s): erts, kernel Related Id(s): seq12872 The 'raw' socket option could not be used multiple times in one call to any e.g gen_tcp function because only one of the occurrences were used. This bug has been fixed, and also a small bug concerning propagating error codes from within inet:setopts/2. Full runtime dependencies of kernel-3.2.0.1: erts-6.1.2, sasl-2.4, stdlib-2.0 --------------------------------------------------------------------- --- ssl-6.0.1.1 ----------------------------------------------------- --------------------------------------------------------------------- The ssl-6.0.1.1 application can be applied independently of other applications on a full OTP 17 installation. --- Fixed Bugs and Malfunctions --- OTP-13151 Application(s): ssl Gracefully ignore proprietary hash_sign algorithms Full runtime dependencies of ssl-6.0.1.1: crypto-3.3, erts-6.0, kernel-3.0, public_key-0.22, stdlib-2.0 --------------------------------------------------------------------- --------------------------------------------------------------------- --------------------------------------------------------------------- From borja.carbo@REDACTED Fri Dec 4 16:29:21 2015 From: borja.carbo@REDACTED (borja.carbo@REDACTED) Date: Fri, 04 Dec 2015 15:29:21 +0000 Subject: [erlang-questions] xsltproc - erl_doc runtime error: Variable 'stylesheet' has not been declared. 
Message-ID: <20151204152921.Horde.T8BiIWF_Vt8xDZALLTz5-Q1@whm.todored.info>

Trying to follow the examples from the erl_doc documentation I ran into the
error in the subject line, which I reproduce here.

laptop@REDACTED:~/MyProjects/InErlang/lib/basura-1.0.0/doc$ xsltproc --noout \
> --stringparam outdir . \
> --stringparam topdocdir . \
> --stringparam pdfdir . \
> --xinclude \
> --stringparam docgen /usr/local/lib/erlang/lib/erl_docgen-0.4 \
> --stringparam gendate "December 5 2011" \
> --stringparam appname application_example \
> --stringparam appver 1.0.0 \
> -path /usr/local/lib/erlang/lib/erl_docgen-0.4/priv/dtd \
> -path /usr/local/lib/erlang/lib/erl_docgen-0.4/priv/dtd_html_entities \
> /usr/local/lib/erlang/lib/erl_docgen-0.4/priv/xsl/db_html.xsl \
> application_example.xml
runtime error: file /usr/local/lib/erlang/lib/erl_docgen-0.4/priv/xsl/db_html.xsl line 582 element choose
Variable 'stylesheet' has not been declared.
laptop@REDACTED:~/MyProjects/InErlang/lib/basura-1.0.0/doc$

The file I used is just a simple modification of the example in the same
documentation, attached and named "application_example.xml", placed in the
same directory from where the command is executed.

The command generated a small file (attached and named
"Application_Example_app.html"), although meaningless.

Is this a bug, or did I simply miss something?

-------------- next part --------------
-------------- next part --------------
A non-text attachment was scrubbed...
Name: application_example.xml
Type: application/xml
Size: 891 bytes
Desc: not available
URL: 

From kenneth@REDACTED Fri Dec 4 17:29:11 2015
From: kenneth@REDACTED (Kenneth Lundin)
Date: Fri, 4 Dec 2015 17:29:11 +0100
Subject: [erlang-questions] xsltproc - erl_doc runtime error: Variable 'stylesheet' has not been declared.
In-Reply-To: <20151204152921.Horde.T8BiIWF_Vt8xDZALLTz5-Q1@whm.todored.info> References: <20151204152921.Horde.T8BiIWF_Vt8xDZALLTz5-Q1@whm.todored.info> Message-ID: Hi, Look in https://github.com/erlang/otp/blob/maint/make/otp_release_targets.mk for an example of how to call xsltproc. The variable stylesheet is supposed to be given as a stringparam to the call of xslt. /Kenneth Den 4 dec 2015 16:30 skrev : > > Trying to follow the examples from the erl_doc documentation I have found > the error from the title and I reproduce here. > > > laptop@REDACTED:~/MyProjects/InErlang/lib/basura-1.0.0/doc$ xsltproc > --noout \ > >> --stringparam outdir . \ >> --stringparam topdocdir . \ >> --stringparam pdfdir . \ >> --xinclude \ >> --stringparam docgen /usr/local/lib/erlang/lib/erl_docgen-0.4 \ >> --stringparam gendate "December 5 2011" \ >> --stringparam appname application_example \ >> --stringparam appver 1.0.0 \ >> -path /usr/local/lib/erlang/lib/erl_docgen-0.4/priv/dtd \ >> -path /usr/local/lib/erlang/lib/erl_docgen-0.4/priv/dtd_html_entities \ >> /usr/local/lib/erlang/lib/erl_docgen-0.4/priv/xsl/db_html.xsl \ >> application_example.xml >> > runtime error: file > /usr/local/lib/erlang/lib/erl_docgen-0.4/priv/xsl/db_html.xsl line 582 > element choose > Variable 'stylesheet' has not been declared. > laptop@REDACTED:~/MyProjects/InErlang/lib/basura-1.0.0/doc$ > > > The file I used is just a simple modification of the example in the same > documentation. Attached and named "application_example.xml". file placed in > the same directory from where the command is executed. > > The command generated a small file (attached and named > "Application_Example_app.html") although meaningless. > > Is this a bug or simple I missed something? > > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From nathaniel@REDACTED Fri Dec 4 18:24:24 2015 From: nathaniel@REDACTED (Nathaniel Waisbrot) Date: Fri, 4 Dec 2015 12:24:24 -0500 Subject: [erlang-questions] gen_server loop pattern Message-ID: I've been writing worker-processes that are mostly non-interactive as gen_servers. It looks something like this (skipping the boring bits, but I can include a full example if this is unclear): ``` -module(fetcher). -behavior(gen_server). init(State) -> gen_server:cast(?MODULE, start), {ok, State}. handle_cast(start, State) -> reschedule(State), {noreply, State}; handle_cast(loop, State) -> {Work, State1} = pull_from_queue(State), worker_pool:task(Work), reschedule(State1), {noreply, State1}; handle_cast({checkpoint, CheckpointData}, State) -> State1 = do_checkpoint(CheckpointData, State), {noreply, State1}. reschedule(State) -> timer:apply_after(State#state.delay_ms, gen_server, cast, [?MODULE, loop]). ``` It feels like I ought to use some other pattern, but I don't know what. There's a few reasons that I've chosen gen_server: * It's easy. The gen_server behavior is simple and I've already written plenty of them. * I can hook it easily to get some simple debug capability. I often add a ``` handle_call(get_state, _From, State) -> {reply, State, State}. ``` * Likewise, I can pause and restart the worker by writing a few very simple handle_* functions. * The "checkpoint" message is used to send a checkpoint request to all workers and then record a safe spot in the queue once I get a reply back from all of them. This doesn't have to be in the `fetcher` module, but once it's already a gen_server it seems easy to wedge it in. Is there a better way to do this? -------------- next part -------------- An HTML attachment was scrubbed... 
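For the self-scheduling part of the question specifically, a pattern often
suggested is to let the server arm its own timer with erlang:send_after/3
and receive the tick in handle_info/2, instead of casting to itself through
the timer server. A sketch only -- `fetcher2` and `do_work/1` are stand-in
names, not the original module's code:

```erlang
-module(fetcher2).
-behavior(gen_server).
-export([start_link/0]).
-export([init/1, handle_call/3, handle_cast/2, handle_info/2,
         terminate/2, code_change/3]).

-record(state, {delay_ms = 1000}).

start_link() ->
    gen_server:start_link({local, ?MODULE}, ?MODULE, #state{}, []).

init(State) ->
    self() ! loop,                         %% kick off the first tick
    {ok, State}.

handle_info(loop, State = #state{delay_ms = Delay}) ->
    State1 = do_work(State),
    erlang:send_after(Delay, self(), loop),  %% reschedule ourselves
    {noreply, State1}.

handle_call(get_state, _From, State) ->    %% keeps the debug hook
    {reply, State, State}.

handle_cast(_Msg, State) ->
    {noreply, State}.

terminate(_Reason, _State) -> ok.
code_change(_OldVsn, State, _Extra) -> {ok, State}.

do_work(State) ->
    %% placeholder for pull_from_queue/1 + worker_pool:task/1
    State.
```

The design trade-off: erlang:send_after/3 is a BIF that avoids routing
every tick through the shared timer server, and because the tick arrives as
a plain message in handle_info/2, external callers cannot perturb the
schedule by casting 'loop' themselves.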
URL: From garry@REDACTED Fri Dec 4 20:37:35 2015 From: garry@REDACTED (Garry Hodgson) Date: Fri, 4 Dec 2015 14:37:35 -0500 Subject: [erlang-questions] avoiding duplicate headers in cowboy In-Reply-To: References: <565DE936.9070400@research.att.com> <01E80AB6-35E9-4B64-AC80-D83431127DE8@erlang.geek.nz> Message-ID: <5661EB7F.4020806@research.att.com> thanks for the pointers. our particular app is pretty specific to our needs. but i always like to study how other people have tackled similar problems. there's so much to learn. On 12/2/15 4:24 PM, Benoit Chesneau wrote: > or https://github.com/benoitc/cowboy_revproxy ... > > On Wed, Dec 2, 2015 at 10:05 PM Geoff Cant > wrote: > > Hi there, I?m sure you?re probably well along with your own code, > but you may be interested in similar work. Heroku recently > released Vegur - https://github.com/heroku/vegur which is the > HTTP/1.1 proxy library built on top of cowboy/ranch and used at > scale there. Fred and I both haunt this list if you have questions > about it. (It may be overkill, but if you need a lot of spec > compliance or are doing large scale proxying it is designed for > those things) > > Cheers, > -Geoff > > > On 1 Dec, 2015, at 10:38, Garry Hodgson > wrote: > > > > I've got an application where I'm using cowboy to build what is, > > in effect, a web proxy. It accepts http/s requests, makes some > > decisions about whether and where to forward, then makes its > > own request of the final endpoint and returns results. it was > > originally written in webmachine and then ported to cowboy. > > > > everything works fine, but for one small nit. it appears that > > cowboy automagically adds its own headers to the returned > > results, which causes duplicate headers for server, content-type, > > and content-length, as seen below. > > > > is there some way to avoid this? 
> > > > $ curl -X GET http://localhost:8080/v0/log/logs?maxrecords=1 -k > -H "$Creds" -v > > * About to connect() to localhost port 8080 (#0) > > * Trying 127.0.0.1... connected > > * Connected to localhost (127.0.0.1) port 8080 (#0) > > > GET /v0/log/logs?maxrecords=1 HTTP/1.1 > > > User-Agent: curl/7.19.7 (x86_64-redhat-linux-gnu) > libcurl/7.19.7 NSS/3.19.1 Basic ECC zlib/1.2.3 libidn/1.18 > libssh2/1.4.2 > > > Host: localhost:8080 > > > Accept: */* > > > Authorization: Bearer bc18fb0f-4b1f-493e-823c-fbefa0383e91 > > > > > < HTTP/1.1 200 OK > > < connection: keep-alive > > < server: Cowboy > > < date: Tue, 01 Dec 2015 17:34:29 GMT > > < content-length: 71 > > < content-type: application/json > > < connection: Keep-Alive > > < date: Tue, 01 Dec 2015 17:34:30 GMT > > < server: AT&T SECWEB 2.1.0 > > < content-length: 71 > > < content-type: application/json > > < x-powered-by: PHP/5.4.28 > > < keep-alive: timeout=5, max=100 > > < > > * Connection #0 to host localhost left intact > > * Closing connection #0 > > {"error":"Not found","error_description":"Could not read token > in CTS"} > > > > -- > > Garry Hodgson > > Lead Member of Technical Staff > > AT&T Chief Security Office (CSO) > > > > "This e-mail and any files transmitted with it are AT&T > property, are confidential, and are intended solely for the use of > the individual or entity to whom this e-mail is addressed. If you > are not one of the named recipient(s) or otherwise have reason to > believe that you have received this message in error, please > notify the sender and delete this message immediately from your > computer. Any other use, retention, dissemination, forwarding, > printing, or copying of this e-mail is strictly prohibited." 
> > > > _______________________________________________ > > erlang-questions mailing list > > erlang-questions@REDACTED > > http://erlang.org/mailman/listinfo/erlang-questions From mrallen1@REDACTED Fri Dec 4 21:41:25 2015 From: mrallen1@REDACTED (Mark Allen) Date: Fri, 4 Dec 2015 20:41:25 +0000 (UTC) Subject: [erlang-questions] Explain please easter egg in lager: boston_lager In-Reply-To: References: Message-ID: <598509770.15525587.1449261685062.JavaMail.yahoo@mail.yahoo.com> All the code we could eliminate we tried to eliminate in 3.x. On Thursday, December 3, 2015 9:35 AM, Jesper Louis Andersen wrote: Oh, was this very important feature removed in Lager 3.x? On Thu, Dec 3, 2015 at 3:30 PM, Max Lapshin wrote: lager_transform.erl has a strange transform for boston_lager:debug(....): all "r"s are replaced with "h"s in the format message. Is it an allusion to a specific pronunciation? -- J. From borja.carbo@REDACTED Sat Dec 5 09:43:27 2015 From: borja.carbo@REDACTED (borja.carbo@REDACTED) Date: Sat, 05 Dec 2015 08:43:27 +0000 Subject: [erlang-questions] xsltproc - erl_doc runtime error: Variable 'stylesheet' has not been declared.
In-Reply-To: References: <20151204152921.Horde.T8BiIWF_Vt8xDZALLTz5-Q1@whm.todored.info> Message-ID: <20151205084327.Horde.DJLODZWblghD9-hbVfYm2w4@whm.todored.info> Hi You are completely right. Not just stylesheet, but also winprefix, pdfname and logo seem to be compulsory. I have also discovered some interdependencies with regard to the values of the directory variables. With the modifications (appended below) I managed to generate the documentation. Is there any way to keep the same thing from happening to other Erlang users? Thanks and Best Regards / Borja laptop@REDACTED:~/MyProjects/InErlang/lib/basura-1.0.0/doc$ xsltproc --noout \ > --stringparam outdir > /home/laptop/MyProjects/InErlang/lib/basura-1.0.0/doc/ \ > --stringparam topdocdir /usr/local/lib/erlang/doc\ > --stringparam pdfdir . \ > --xinclude \ > --stringparam stylesheet ../lib/erl_docgen-0.4/priv/css/otp_doc.css \ > --stringparam winprefix Erlang \ > --stringparam logo ../lib/erl_docgen-0.4/priv/images/erlang-logo.png \ > --stringparam pdfname pdfname \ > --stringparam docgen /usr/local/lib/erlang/lib/erl_docgen-0.4 \ > --stringparam gendate "December 5 2011" \ > --stringparam appname application_example \ > --stringparam appver 1.0.0 \ > -path /usr/local/lib/erlang/lib/erl_docgen-0.4/priv/dtd \ > -path /usr/local/lib/erlang/lib/erl_docgen-0.4/priv/dtd_html_entities \ > /usr/local/lib/erlang/lib/erl_docgen-0.4/priv/xsl/db_html.xsl \ > application_example.xml laptop@REDACTED:~/MyProjects/InErlang/lib/basura-1.0.0/doc$ Quoting Kenneth Lundin: > Hi, > > Look in https://github.com/erlang/otp/blob/maint/make/otp_release_targets.mk > for an example of how to call xsltproc. The variable stylesheet is supposed > to be given as a stringparam to the call of xslt. > > /Kenneth > On 4 Dec 2015 16:30, wrote: > >> >> Trying to follow the examples from the erl_doc documentation, I found >> the error in the title, which I reproduce here.
>> >> >> laptop@REDACTED:~/MyProjects/InErlang/lib/basura-1.0.0/doc$ xsltproc >> --noout \ >> >>> --stringparam outdir . \ >>> --stringparam topdocdir . \ >>> --stringparam pdfdir . \ >>> --xinclude \ >>> --stringparam docgen /usr/local/lib/erlang/lib/erl_docgen-0.4 \ >>> --stringparam gendate "December 5 2011" \ >>> --stringparam appname application_example \ >>> --stringparam appver 1.0.0 \ >>> -path /usr/local/lib/erlang/lib/erl_docgen-0.4/priv/dtd \ >>> -path /usr/local/lib/erlang/lib/erl_docgen-0.4/priv/dtd_html_entities \ >>> /usr/local/lib/erlang/lib/erl_docgen-0.4/priv/xsl/db_html.xsl \ >>> application_example.xml >>> >> runtime error: file >> /usr/local/lib/erlang/lib/erl_docgen-0.4/priv/xsl/db_html.xsl line 582 >> element choose >> Variable 'stylesheet' has not been declared. >> laptop@REDACTED:~/MyProjects/InErlang/lib/basura-1.0.0/doc$ >> >> >> The file I used is just a simple modification of the example in the same >> documentation. It is attached, named "application_example.xml", and placed in >> the same directory from which the command is executed. >> >> The command generated a small file (attached and named >> "Application_Example_app.html"), although its content is meaningless. >> >> Is this a bug, or did I simply miss something?
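For anyone else who trips over this: the steps in this thread can be collected into a small wrapper that supplies all four of the otherwise-undeclared parameters (stylesheet, winprefix, logo, pdfname). The DOCGEN path, the app.pdf value, and the build_cmd helper are assumptions modeled on the working invocation in this thread, so adjust them to your local erl_docgen install:

```shell
#!/bin/sh
# Sketch: assemble an xsltproc call for db_html.xsl with every
# stringparam it requires.  Paths below are assumptions for a local
# erl_docgen-0.4 install; adjust to your system.
DOCGEN=${DOCGEN:-/usr/local/lib/erlang/lib/erl_docgen-0.4}

# build_cmd APPNAME APPVER XMLFILE -- prints the full command.
build_cmd() {
    echo xsltproc --noout --xinclude \
        --stringparam outdir . \
        --stringparam topdocdir /usr/local/lib/erlang/doc \
        --stringparam pdfdir . \
        --stringparam docgen "$DOCGEN" \
        --stringparam stylesheet "$DOCGEN/priv/css/otp_doc.css" \
        --stringparam winprefix Erlang \
        --stringparam logo "$DOCGEN/priv/images/erlang-logo.png" \
        --stringparam pdfname app.pdf \
        --stringparam gendate "$(date '+%B %e %Y')" \
        --stringparam appname "$1" \
        --stringparam appver "$2" \
        -path "$DOCGEN/priv/dtd" \
        -path "$DOCGEN/priv/dtd_html_entities" \
        "$DOCGEN/priv/xsl/db_html.xsl" "$3"
}

# Print the command; pipe the output to sh to actually run it.
build_cmd application_example 1.0.0 application_example.xml
```

Printing rather than executing makes it easy to inspect the final parameter set before running it against your own xml file.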
From icfp.publicity@REDACTED Sun Dec 6 04:40:09 2015 From: icfp.publicity@REDACTED (Lindsey Kuper) Date: Sun, 06 Dec 2015 03:40:09 +0000 Subject: [erlang-questions] ICFP 2016 Call for Papers Message-ID: <001a113f3cfe47a3270526327fd2@google.com> ICFP 2016 The 21st ACM SIGPLAN International Conference on Functional Programming http://conf.researchr.org/home/icfp-2016 Call for Papers Important dates --------------- Submissions due: Wednesday, March 16 2016, 15:00 (UTC) https://icfp2016.hotcrp.com (in preparation as of December 1) Author response: Monday, 2 May, 2016, 15:00 (UTC) - Thursday, 5 May, 2016, 15:00 (UTC) Notification: Friday, 20 May, 2016 Final copy due: TBA Early registration: TBA Conference: Tuesday, 20 September - Thursday, 22 September, 2016 Scope ----- ICFP 2016 seeks original papers on the art and science of functional programming. Submissions are invited on all topics from principles to practice, from foundations to features, and from abstraction to application. The scope includes all languages that encourage functional programming, including both purely applicative and imperative languages, as well as languages with objects, concurrency, or parallelism.
- Implementation: abstract machines; virtual machines; interpretation;   compilation; compile-time and run-time optimization; garbage   collection and memory management; multi-threading; exploiting   parallel hardware; interfaces to foreign functions, services,   components, or low-level machine resources. - Software-Development Techniques: algorithms and data structures;   design patterns; specification; verification; validation; proof   assistants; debugging; testing; tracing; profiling. - Foundations: formal semantics; lambda calculus; rewriting; type   theory; monads; continuations; control; state; effects; program   verification; dependent types. - Analysis and Transformation: control-flow; data-flow; abstract   interpretation; partial evaluation; program calculation. - Applications: symbolic computing; formal-methods tools; artificial   intelligence; systems programming; distributed-systems and web   programming; hardware design; databases; XML processing; scientific   and numerical computing; graphical user interfaces; multimedia and   3D graphics programming; scripting; system administration; security. - Education: teaching introductory programming; parallel programming;   mathematical proof; algebra. - Functional Pearls: elegant, instructive, and fun essays on   functional programming. - Experience Reports: short papers that provide evidence that   functional programming really works or describe obstacles that have   kept it from working. If you are concerned about the appropriateness of some topic, do not hesitate to contact the program chair. Abbreviated instructions for authors ------------------------------------ - By Wednesday, March 16 2016, 15:00 (UTC), submit a full paper of at   most 12 pages (6 pages for an Experience Report), in standard   SIGPLAN conference format, including figures but ***excluding   bibliography***. The deadlines will be strictly enforced and papers exceeding the page limits will be summarily rejected. 
***ICFP 2016 will employ a lightweight double-blind reviewing process.*** To facilitate this, submitted papers must adhere to two rules:  1. ***author names and institutions must be omitted***, and  2. ***references to authors' own related work should be in the third     person*** (e.g., not "We build on our previous work ..." but     rather "We build on the work of ..."). The purpose of this process is to help the PC and external reviewers come to an initial judgement about the paper without bias, not to make it impossible for them to discover the authors if they were to try. Nothing should be done in the name of anonymity that weakens the submission or makes the job of reviewing the paper more difficult (e.g., important background references should not be omitted or anonymized). In addition, authors should feel free to disseminate their ideas or draft versions of their paper as they normally would. For instance, authors may post drafts of their papers on the web or give talks on their research ideas. We have put together a document answering frequently asked questions that should address many common concerns: http://conf.researchr.org/track/icfp-2016/icfp-2016-papers#Submission-and-Reviewing-FAQ - Authors have the option to attach supplementary material to a   submission, on the understanding that reviewers may choose not to   look at it. The material should be uploaded at submission time, as a   single pdf or a tarball, not via a URL. This supplementary material   may or may not be anonymized; if not anonymized, it will only be   revealed to reviewers after they have submitted their review of your   paper and learned your identity. 
- Each submission must adhere to SIGPLAN's republication policy, as   explained on the web at:   http://www.sigplan.org/Resources/Policies/Republication - Authors of resubmitted (but previously rejected) papers have the   option to attach an annotated copy of the reviews of their previous   submission(s), explaining how they have addressed these previous   reviews in the present submission. If a reviewer identifies   him/herself as a reviewer of this previous submission and wishes to   see how his/her comments have been addressed, the program chair will   communicate to this reviewer the annotated copy of his/her previous   review. Otherwise, no reviewer will read the annotated copies of the   previous reviews. Overall, a submission will be evaluated according to its relevance, correctness, significance, originality, and clarity. It should explain its contributions in both general and technical terms, clearly identifying what has been accomplished, explaining why it is significant, and comparing it with previous work. The technical content should be accessible to a broad audience. Functional Pearls and Experience Reports are separate categories of papers that need not report original research results and must be marked as such at the time of submission. Detailed guidelines on both categories are given below. Presentations will be videotaped and released online if the presenter consents. The proceedings will be freely available for download from the ACM Digital Library from at least one week before the start of the conference until two weeks after the conference. Formatting: Submissions must be in PDF format printable in black and white on US Letter sized paper and interpretable by Ghostscript. Papers must adhere to the standard SIGPLAN conference format: two columns, nine-point font on a ten-point baseline, with columns 20pc (3.33in) wide and 54pc (9in) tall, with a column gutter of 2pc (0.33in). 
A suitable document template for LaTeX is available at http://www.sigplan.org/Resources/Author/ Submission: Submissions will be accepted at https://icfp2016.hotcrp.com (in preparation as of December 1). Improved versions of a paper may be submitted at any point before the submission deadline using the same web interface. Author response: Authors will have a 72-hour period, starting at 15:00 UTC on Monday, 2 May, 2016, to read reviews and respond to them. ACM Author-Izer is a unique service that enables ACM authors to generate and post links on either their home page or institutional repository for visitors to download the definitive version of their articles from the ACM Digital Library at no charge. Downloads through Author-Izer links are captured in official ACM statistics, improving the accuracy of usage and impact measurements. Consistently linking the definitive version of ACM article should reduce user confusion over article versioning. After your article has been published and assigned to your ACM Author Profile page, please visit http://www.acm.org/publications/acm-author-izer-service to learn how to create your links for free downloads from the ACM DL. Publication date: The official publication date of accepted papers is the date the proceedings are made available in the ACM Digital Library. This date may be up to two weeks prior to the first day of the conference. The official publication date affects the deadline for any patent filings related to published work. Special categories of papers ---------------------------- In addition to research papers, ICFP solicits two kinds of papers that do not require original research contributions: Functional Pearls, which are full papers, and Experience Reports, which are limited to six pages. Authors submitting such papers may wish to consider the following advice. Functional Pearls ================= A Functional Pearl is an elegant essay about something related to functional programming. 
Examples include, but are not limited to: - a new and thought-provoking way of looking at an old idea - an instructive example of program calculation or proof - a nifty presentation of an old or new data structure - an interesting application of functional programming techniques - a novel use or exposition of functional programming in the classroom While pearls often demonstrate an idea through the development of a short program, there is no requirement or expectation that they do so. Thus, they encompass the notions of theoretical and educational pearls. Functional Pearls are valued as highly and judged as rigorously as ordinary papers, but using somewhat different criteria. In particular, a pearl is not required to report original research, but it should be concise, instructive, and entertaining. Your pearl is likely to be rejected if your readers get bored, if the material gets too complicated, if too much specialized knowledge is needed, or if the writing is inelegant. The key to writing a good pearl is polishing. A submission you wish to have treated as a pearl must be marked as such on the submission web page, and should contain the words ``Functional Pearl'' somewhere in its title or subtitle. These steps will alert reviewers to use the appropriate evaluation criteria. Pearls will be combined with ordinary papers, however, for the purpose of computing the conference's acceptance rate. Experience Reports ================== The purpose of an Experience Report is to help create a body of published, refereed, citable evidence that functional programming really works -- or to describe what obstacles prevent it from working.
Possible topics for an Experience Report include, but are not limited to: - insights gained from real-world projects using functional   programming - comparison of functional programming with conventional programming   in the context of an industrial project or a university curriculum - project-management, business, or legal issues encountered when using   functional programming in a real-world project - curricular issues encountered when using functional programming in   education - real-world constraints that created special challenges for an   implementation of a functional language or for functional   programming in general An Experience Report is distinguished from a normal ICFP paper by its title, by its length, and by the criteria used to evaluate it. - Both in the proceedings and in any citations, the title of each   accepted Experience Report must begin with the words ``Experience   Report'' followed by a colon. The acceptance rate for Experience   Reports will be computed and reported separately from the rate for   ordinary papers. - An Experience Report is at most six pages long. Each accepted   Experience Report will be presented at the conference, but depending   on the number of Experience Reports and regular papers accepted,   authors of Experience reports may be asked to give shorter talks. - Because the purpose of Experience Reports is to enable our community   to accumulate a body of evidence about the efficacy of functional   programming, an acceptable Experience Report need not add to the   body of knowledge of the functional-programming community by   presenting novel results or conclusions. It is sufficient if the   Report states a clear thesis and provides supporting evidence. The   thesis must be relevant to ICFP, but it need not be novel. The program committee will accept or reject Experience Reports based on whether they judge the evidence to be convincing. 
Anecdotal evidence will be acceptable provided it is well argued and the author explains what efforts were made to gather as much evidence as possible. Typically, more convincing evidence is obtained from papers which show how functional programming was used than from papers which only say that functional programming was used. The most convincing evidence often includes comparisons of situations before and after the introduction or discontinuation of functional programming. Evidence drawn from a single person's experience may be sufficient, but more weight will be given to evidence drawn from the experience of groups of people. An Experience Report should be short and to the point: make a claim about how well functional programming worked on your project and why, and produce evidence to substantiate your claim. If functional programming worked for you in the same ways it has worked for others, you need only to summarize the results -- the main part of your paper should discuss how well it worked and in what context. Most readers will not want to know all the details of your project and its implementation, but please characterize your project and its context well enough so that readers can judge to what degree your experience is relevant to their own projects. Be especially careful to highlight any unusual aspects of your project. Also keep in mind that specifics about your project are more valuable than generalities about functional programming; for example, it is more valuable to say that your team delivered its software a month ahead of schedule than it is to say that functional programming made your team more productive. If your paper not only describes experience but also presents new technical results, or if your experience refutes cherished beliefs of the functional-programming community, you may be better off submitting it as a full paper, which will be judged by the usual criteria of novelty, originality, and relevance.
If you are unsure in which category to submit, the program chair will be happy to help you decide. Organizers ---------- General Co-Chairs: Jacques Garrigue (Nagoya University) Gabriele Keller (University of New South Wales) Program Chair: Eijiro Sumii (Tohoku University) Program Committee: Koen Claessen (Chalmers University of Technology) Joshua Dunfield (University of British Columbia, Canada) Matthew Fluet (Rochester Institute of Technology) Nate Foster (Cornell University) Dan Grossman (University of Washington, USA) Jurriaan Hage (Utrecht University) Roman Leshchinskiy (Standard Chartered Bank) Keisuke Nakano (The University of Electro-Communications) Aleksandar Nanevski (IMDEA Software Institute) Scott Owens (University of Kent) Sungwoo Park (Pohang University of Science and Technology) Amr Sabry (Indiana University) Tom Schrijvers (KU Leuven) Olin Shivers (Northeastern University) Walid Taha (Halmstad University) Dimitrios Vytiniotis (Microsoft Research, Cambridge) David Walker (Princeton University) Nobuko Yoshida (Imperial College London, UK) External Review Committee to be announced. From ok@REDACTED Mon Dec 7 05:45:03 2015 From: ok@REDACTED (Richard A.
O'Keefe) Date: Mon, 7 Dec 2015 17:45:03 +1300 Subject: [erlang-questions] Updates, lenses, and why cross-module inlining would be nice In-Reply-To: <565FEB28.4090301@gmail.com> References: <1448544949.1214768.450737385.156BF7D2@webmail.messagingengine.com> <5657EA88.1010901@gmail.com> <1462F19A-19B6-4A16-A26D-4643BC0830CB@cs.otago.ac.nz> <565CA6AC.9020306@gmail.com> <32BC6909-E775-48B0-98EB-D89716098F61@cs.otago.ac.nz> <565E8D7F.5030106@gmail.com> <0AB5C64B-11D3-46A5-BB6A-AF275C5D7E08@cs.otago.ac.nz> <565FEB28.4090301@gmail.com> Message-ID: <8216D951-C321-48B2-ADFC-FA0D2D5B7D9E@cs.otago.ac.nz> On 3/12/2015, at 8:11 pm, Michael Truog wrote: > If we had header files that were only allowed to contain functions and > types that we called "template files" for lack of a better name > (ideally, they would have their own export statements to help > distinguish between private functions/types and the interface for modules > to use) > AND > we had the include files (and template files) versioned within the beam > output (to address the untracked dependency problem). > > Wouldn't that approach be preferable when compared > to trying to manage the module dependency graph during a > runtime code upgrades? Why would the "template files" approach > not be sufficient? These are the differences I can see between 'template files' and 'import_static modules'. (1) 'template files' would not be modules. Having their own export directives would make them modul*ar*, but they could not use 'fun M:F/A' to refer to their own functions, having no M they could use. This does not seem like a good thing. (2) Headers can be nested. If 'template files' were to be like headers, this would create nested scopes for top level functions, and the possibility of multiple distinct functions with the same name and arity and in some sense "in" the very same module. 
I don't see that as an insuperable problem, but it's a very big change to the language in order to avoid what is really not likely to be a practical problem. (3) 'template files' are copied into the including module, which allows different modules to have included different versions of a 'template file'. Like headers, this DOES NOT SOLVE the dependency and version skew problems, IT CREATES THOSE PROBLEMS. So with 'template files', either you have precisely the same problems you have with headers plus a whole lot of extra complexity, or you have to track dependencies anyway. If we can take a step away from Erlang, we can see that we have a fundamental problem. Resource A is prepared from foundations x and c. Resource B is prepared from foundations y and c. Resources A and B have to "fit together" in some fashion. Common foundation c has something to do with how that fitting together works. c is revised to c'. A is re-prepared to A'. If B were re-prepared, B' and A' would be compatible, just as B and A were compatible. But B and A' are not compatible. As far as I can see, there are three general ways to deal with this kind of problem. 1. Detection. When you try to use A' and B together, detect that they were prepared from c' and c and refuse to allow this use. This requires dependency tracking. 2. Prevention. When c is revised to c', use dependencies forward and queue A and B for rebuilding. This requires dependency tracking. 3. Avoidance. Make the preparation step fairly trivial so that whenever A (B) makes references to c, the latest version of c is used. In programming language terms, this is not even lazy evaluation, it's call-by-name. It's the way Erlang currently handles remote calls. (More or less.) As a rule of thumb, early binding is the route to efficiency (and early error detection), late binding is the route to flexibility (and late error detection). The performance cost may be anywhere between slight and scary depending on the application. 3'. 
It is only necessary to provide the *appearance* of late binding. I have a faint and probably unreliable memory that the SPITBOL implementation of SNOBOL4 could generate "optimised" code but could back it out and recompile it when the assumptions it had made turned out to be wrong. Amongst other things, this requires the system to keep track of this-depends-on-that at run time, so it's still dependency tracking, but it need not be visible to the programmer. What might be noticeable would be a performance blip as loading c' caused everything that had been run-time compiled against c to be de-optimised. TANSTAAFL. From icfp.publicity@REDACTED Mon Dec 7 09:58:35 2015 From: icfp.publicity@REDACTED (Lindsey Kuper) Date: Mon, 7 Dec 2015 00:58:35 -0800 Subject: [erlang-questions] ICFP 2016 Call for Papers In-Reply-To: <001a113f3cfe47a3270526327fd2@google.com> References: <001a113f3cfe47a3270526327fd2@google.com> Message-ID: [My apologies for the garbled text in a previous version of this email. -- Lindsey] ICFP 2016 The 21st ACM SIGPLAN International Conference on Functional Programming http://conf.researchr.org/home/icfp-2016 Call for Papers Important dates --------------- Submissions due: Wednesday, March 16 2016, 15:00 (UTC) https://icfp2016.hotcrp.com (in preparation as of December 1) Author response: Monday, 2 May, 2016, 15:00 (UTC) - Thursday, 5 May, 2016, 15:00 (UTC) Notification: Friday, 20 May, 2016 Final copy due: TBA Early registration: TBA Conference: Tuesday, 20 September - Thursday, 22 September, 2016 Scope ----- ICFP 2016 seeks original papers on the art and science of functional programming. Submissions are invited on all topics from principles to practice, from foundations to features, and from abstraction to application. The scope includes all languages that encourage functional programming, including both purely applicative and imperative languages, as well as languages with objects, concurrency, or parallelism. 
Topics of interest include (but are not limited to): - Language Design: concurrency, parallelism, and distribution; modules; components and composition; metaprogramming; type systems; interoperability; domain-specific languages; and relations to imperative, object-oriented, or logic programming. - Implementation: abstract machines; virtual machines; interpretation; compilation; compile-time and run-time optimization; garbage collection and memory management; multi-threading; exploiting parallel hardware; interfaces to foreign functions, services, components, or low-level machine resources. - Software-Development Techniques: algorithms and data structures; design patterns; specification; verification; validation; proof assistants; debugging; testing; tracing; profiling. - Foundations: formal semantics; lambda calculus; rewriting; type theory; monads; continuations; control; state; effects; program verification; dependent types. - Analysis and Transformation: control-flow; data-flow; abstract interpretation; partial evaluation; program calculation. - Applications: symbolic computing; formal-methods tools; artificial intelligence; systems programming; distributed-systems and web programming; hardware design; databases; XML processing; scientific and numerical computing; graphical user interfaces; multimedia and 3D graphics programming; scripting; system administration; security. - Education: teaching introductory programming; parallel programming; mathematical proof; algebra. - Functional Pearls: elegant, instructive, and fun essays on functional programming. - Experience Reports: short papers that provide evidence that functional programming really works or describe obstacles that have kept it from working. If you are concerned about the appropriateness of some topic, do not hesitate to contact the program chair. 
Abbreviated instructions for authors ------------------------------------ - By Wednesday, March 16 2016, 15:00 (UTC), submit a full paper of at most 12 pages (6 pages for an Experience Report), in standard SIGPLAN conference format, including figures but ***excluding bibliography***. The deadlines will be strictly enforced and papers exceeding the page limits will be summarily rejected. ***ICFP 2016 will employ a lightweight double-blind reviewing process.*** To facilitate this, submitted papers must adhere to two rules: 1. ***author names and institutions must be omitted***, and 2. ***references to authors' own related work should be in the third person*** (e.g., not "We build on our previous work ..." but rather "We build on the work of ..."). The purpose of this process is to help the PC and external reviewers come to an initial judgement about the paper without bias, not to make it impossible for them to discover the authors if they were to try. Nothing should be done in the name of anonymity that weakens the submission or makes the job of reviewing the paper more difficult (e.g., important background references should not be omitted or anonymized). In addition, authors should feel free to disseminate their ideas or draft versions of their paper as they normally would. For instance, authors may post drafts of their papers on the web or give talks on their research ideas. We have put together a document answering frequently asked questions that should address many common concerns: http://conf.researchr.org/track/icfp-2016/icfp-2016-papers#Submission-and-Reviewing-FAQ - Authors have the option to attach supplementary material to a submission, on the understanding that reviewers may choose not to look at it. The material should be uploaded at submission time, as a single pdf or a tarball, not via a URL. 
This supplementary material may or may not be anonymized; if not anonymized, it will only be revealed to reviewers after they have submitted their review of your paper and learned your identity. - Each submission must adhere to SIGPLAN's republication policy, as explained on the web at: http://www.sigplan.org/Resources/Policies/Republication - Authors of resubmitted (but previously rejected) papers have the option to attach an annotated copy of the reviews of their previous submission(s), explaining how they have addressed these previous reviews in the present submission. If a reviewer identifies him/herself as a reviewer of this previous submission and wishes to see how his/her comments have been addressed, the program chair will communicate to this reviewer the annotated copy of his/her previous review. Otherwise, no reviewer will read the annotated copies of the previous reviews. Overall, a submission will be evaluated according to its relevance, correctness, significance, originality, and clarity. It should explain its contributions in both general and technical terms, clearly identifying what has been accomplished, explaining why it is significant, and comparing it with previous work. The technical content should be accessible to a broad audience. Functional Pearls and Experience Reports are separate categories of papers that need not report original research results and must be marked as such at the time of submission. Detailed guidelines on both categories are given below. Presentations will be videotaped and released online if the presenter consents. The proceedings will be freely available for download from the ACM Digital Library from at least one week before the start of the conference until two weeks after the conference. Formatting: Submissions must be in PDF format printable in black and white on US Letter sized paper and interpretable by Ghostscript. 
Papers must adhere to the standard SIGPLAN conference format: two columns, nine-point font on a ten-point baseline, with columns 20pc (3.33in) wide and 54pc (9in) tall, with a column gutter of 2pc (0.33in). A suitable document template for LaTeX is available at http://www.sigplan.org/Resources/Author/ Submission: Submissions will be accepted at https://icfp2016.hotcrp.com (in preparation as of December 1). Improved versions of a paper may be submitted at any point before the submission deadline using the same web interface. Author response: Authors will have a 72-hour period, starting at 15:00 UTC on Monday, 2 May, 2016, to read reviews and respond to them. ACM Author-Izer is a unique service that enables ACM authors to generate and post links on either their home page or institutional repository for visitors to download the definitive version of their articles from the ACM Digital Library at no charge. Downloads through Author-Izer links are captured in official ACM statistics, improving the accuracy of usage and impact measurements. Consistently linking the definitive version of ACM article should reduce user confusion over article versioning. After your article has been published and assigned to your ACM Author Profile page, please visit http://www.acm.org/publications/acm-author-izer-service to learn how to create your links for free downloads from the ACM DL. Publication date: The official publication date of accepted papers is the date the proceedings are made available in the ACM Digital Library. This date may be up to two weeks prior to the first day of the conference. The official publication date affects the deadline for any patent filings related to published work. Special categories of papers ---------------------------- In addition to research papers, ICFP solicits two kinds of papers that do not require original research contributions: Functional Pearls, which are full papers, and Experience Reports, which are limited to six pages. 
Authors submitting such papers may wish to consider the following advice.

Functional Pearls
=================

A Functional Pearl is an elegant essay about something related to functional programming. Examples include, but are not limited to:

- a new and thought-provoking way of looking at an old idea
- an instructive example of program calculation or proof
- a nifty presentation of an old or new data structure
- an interesting application of functional programming techniques
- a novel use or exposition of functional programming in the classroom

While pearls often demonstrate an idea through the development of a short program, there is no requirement or expectation that they do so. Thus, they encompass the notions of theoretical and educational pearls.

Functional Pearls are valued as highly and judged as rigorously as ordinary papers, but using somewhat different criteria. In particular, a pearl is not required to report original research, but it should be concise, instructive, and entertaining. Your pearl is likely to be rejected if your readers get bored, if the material gets too complicated, if too much specialized knowledge is needed, or if the writing is inelegant. The key to writing a good pearl is polishing.

A submission you wish to have treated as a pearl must be marked as such on the submission web page, and should contain the words ``Functional Pearl'' somewhere in its title or subtitle. These steps will alert reviewers to use the appropriate evaluation criteria. Pearls will be combined with ordinary papers, however, for the purpose of computing the conference's acceptance rate.

Experience Reports
==================

The purpose of an Experience Report is to help create a body of published, refereed, citable evidence that functional programming really works -- or to describe what obstacles prevent it from working.
Possible topics for an Experience Report include, but are not limited to: - insights gained from real-world projects using functional programming - comparison of functional programming with conventional programming in the context of an industrial project or a university curriculum - project-management, business, or legal issues encountered when using functional programming in a real-world project - curricular issues encountered when using functional programming in education - real-world constraints that created special challenges for an implementation of a functional language or for functional programming in general An Experience Report is distinguished from a normal ICFP paper by its title, by its length, and by the criteria used to evaluate it. - Both in the proceedings and in any citations, the title of each accepted Experience Report must begin with the words ``Experience Report'' followed by a colon. The acceptance rate for Experience Reports will be computed and reported separately from the rate for ordinary papers. - An Experience Report is at most six pages long. Each accepted Experience Report will be presented at the conference, but depending on the number of Experience Reports and regular papers accepted, authors of Experience reports may be asked to give shorter talks. - Because the purpose of Experience Reports is to enable our community to accumulate a body of evidence about the efficacy of functional programming, an acceptable Experience Report need not add to the body of knowledge of the functional-programming community by presenting novel results or conclusions. It is sufficient if the Report states a clear thesis and provides supporting evidence. The thesis must be relevant to ICFP, but it need not be novel. The program committee will accept or reject Experience Reports based on whether they judge the evidence to be convincing. 
Anecdotal evidence will be acceptable provided it is well argued and the author explains what efforts were made to gather as much evidence as possible. Typically, more convincing evidence is obtained from papers which show how functional programming was used than from papers which only say that functional programming was used. The most convincing evidence often includes comparisons of situations before and after the introduction or discontinuation of functional programming. Evidence drawn from a single person's experience may be sufficient, but more weight will be given to evidence drawn from the experience of groups of people.

An Experience Report should be short and to the point: make a claim about how well functional programming worked on your project and why, and produce evidence to substantiate your claim. If functional programming worked for you in the same ways it has worked for others, you need only summarize the results; the main part of your paper should discuss how well it worked and in what context. Most readers will not want to know all the details of your project and its implementation, but please characterize your project and its context well enough so that readers can judge to what degree your experience is relevant to their own projects. Be especially careful to highlight any unusual aspects of your project. Also keep in mind that specifics about your project are more valuable than generalities about functional programming; for example, it is more valuable to say that your team delivered its software a month ahead of schedule than it is to say that functional programming made your team more productive.

If your paper not only describes experience but also presents new technical results, or if your experience refutes cherished beliefs of the functional-programming community, you may be better off submitting it as a full paper, which will be judged by the usual criteria of novelty, originality, and relevance.
If you are unsure in which category to submit, the program chair will be happy to help you decide. Organizers ---------- General Co-Chairs: Jacques Garrigue (Nagoya University) Gabriele Keller (University of New South Wales) Program Chair: Eijiro Sumii (Tohoku University) Program Committee: Koen Claessen (Chalmers University of Technology) Joshua Dunfield (University of British Columbia, Canada) Matthew Fluet (Rochester Institute of Technology) Nate Foster (Cornell University) Dan Grossman (University of Washington, USA) Jurriaan Hage (Utrecht University) Roman Leshchinskiy (Standard Chartered Bank) Keisuke Nakano (The University of Electro-Communications) Aleksandar Nanevski (IMDEA Software Institute) Scott Owens (University of Kent) Sungwoo Park (Pohang University of Science and Technology) Amr Sabry (Indiana University) Tom Schrijvers (KU Leuven) Olin Shivers (Northeastern University) Walid Taha (Halmstad University) Dimitrios Vytiniotis (Microsoft Research, Cambridge) David Walker (Princeton University) Nobuko Yoshida (Imperial College London, UK) External Review Committee to be announced. From evnix.com@REDACTED Mon Dec 7 10:03:44 2015 From: evnix.com@REDACTED (avinash D'silva) Date: Mon, 7 Dec 2015 14:33:44 +0530 Subject: [erlang-questions] how to better organize code for a website Message-ID: Hi, I am using cowboy+cowboy_session for building websites. each route has something similar as shown below: -module(get_pageX). -behaviour(cowboy_http_handler). -export([init/3]). -export([handle/2]). -export([terminate/3]). -record(state, { }). init(_, Req, _Opts) -> {ok, Req, #state{}}. handle(Req, State=#state{}) -> case cowboy_session:get("loggedin", Req) of true -> #do something here _ -> #not logged in. end, {ok, ReqFinal} = cowboy_req:reply(200, [{<<"content-type">>, <<"text/html">>}], <<"Some response">>, Req), {ok, ReqFinal, State}. terminate(_Reason, _Req, _State) -> ok. 
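[Editor's note: for readers trying to compile the snippet above, Erlang comments start with `%`, not `#`, so the `#do something here` lines would not compile. A minimal self-contained version is sketched below, assuming the cowboy 1.x `cowboy_http_handler` behaviour and a `cowboy_session:get/2` that returns the stored value directly, as the original snippet implies.]

```erlang
%% A compilable sketch of the handler above. Assumptions: cowboy 1.x
%% (cowboy_http_handler behaviour) and a cowboy_session:get/2 that
%% returns the stored value directly, as the original post implies.
-module(get_pagex).
-behaviour(cowboy_http_handler).

-export([init/3, handle/2, terminate/3]).

-record(state, {}).

init(_Type, Req, _Opts) ->
    {ok, Req, #state{}}.

handle(Req, State = #state{}) ->
    %% Note: Erlang comments use %, not #.
    Body = case cowboy_session:get("loggedin", Req) of
               true -> <<"do something here">>;   % logged in
               _    -> <<"not logged in">>
           end,
    {ok, ReqFinal} = cowboy_req:reply(200,
                                      [{<<"content-type">>, <<"text/html">>}],
                                      Body,
                                      Req),
    {ok, ReqFinal, State}.

terminate(_Reason, _Req, _State) ->
    ok.
```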
The above CASE(code) is repeated for every route to check if the user is logged in, is there a better way to organize this? what I am looking for is a way to check if the user is logged in and if not, redirect to login route. PS: I am new to Erlang Regards, Avinash D' Silva -------------- next part -------------- An HTML attachment was scrubbed... URL: From grahamrhay@REDACTED Mon Dec 7 10:38:46 2015 From: grahamrhay@REDACTED (Graham Hay) Date: Mon, 7 Dec 2015 09:38:46 +0000 Subject: [erlang-questions] how to better organize code for a website In-Reply-To: References: Message-ID: You might be able to do something with a cowboy middleware: http://ninenines.eu/docs/en/cowboy/HEAD/guide/middlewares/ Or, failing that, you could just extract that case to a module somewhere, and pass in "do something here" as a fun. On 7 December 2015 at 09:03, avinash D'silva wrote: > Hi, > > > I am using cowboy+cowboy_session for building websites. > > each route has something similar as shown below: > > > -module(get_pageX). > -behaviour(cowboy_http_handler). > > -export([init/3]). > -export([handle/2]). > -export([terminate/3]). > > -record(state, { > }). > > init(_, Req, _Opts) -> > {ok, Req, #state{}}. > > handle(Req, State=#state{}) -> > > > > case cowboy_session:get("loggedin", Req) of > > true -> > #do something here > > _ -> > #not logged in. > > end, > > > > {ok, ReqFinal} = cowboy_req:reply(200, > [{<<"content-type">>, <<"text/html">>}], > <<"Some response">>, > Req), > > {ok, ReqFinal, State}. > > terminate(_Reason, _Req, _State) -> > ok. > > > The above CASE(code) is repeated for every route to check if the user is > logged in, is there a better way to organize this? > > what I am looking for is a way to check if the user is logged in and if > not, redirect to login route. 
> > > PS: I am new to Erlang > > > Regards, > Avinash D' Silva > > > > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From evnix.com@REDACTED Mon Dec 7 11:39:07 2015 From: evnix.com@REDACTED (avinash D'silva) Date: Mon, 7 Dec 2015 16:09:07 +0530 Subject: [erlang-questions] how to better organize code for a website In-Reply-To: References: Message-ID: wow!! thanks! It worked perfectly. is there a way to assign the middleware to specific routes? currently all the public routes are blocked due to the login middleware. On Mon, Dec 7, 2015 at 3:08 PM, Graham Hay wrote: > You might be able to do something with a cowboy middleware: > > http://ninenines.eu/docs/en/cowboy/HEAD/guide/middlewares/ > > Or, failing that, you could just extract that case to a module somewhere, > and pass in "do something here" as a fun. > > On 7 December 2015 at 09:03, avinash D'silva wrote: > >> Hi, >> >> >> I am using cowboy+cowboy_session for building websites. >> >> each route has something similar as shown below: >> >> >> -module(get_pageX). >> -behaviour(cowboy_http_handler). >> >> -export([init/3]). >> -export([handle/2]). >> -export([terminate/3]). >> >> -record(state, { >> }). >> >> init(_, Req, _Opts) -> >> {ok, Req, #state{}}. >> >> handle(Req, State=#state{}) -> >> >> >> >> case cowboy_session:get("loggedin", Req) of >> >> true -> >> #do something here >> >> _ -> >> #not logged in. >> >> end, >> >> >> >> {ok, ReqFinal} = cowboy_req:reply(200, >> [{<<"content-type">>, <<"text/html">>}], >> <<"Some response">>, >> Req), >> >> {ok, ReqFinal, State}. >> >> terminate(_Reason, _Req, _State) -> >> ok. >> >> >> The above CASE(code) is repeated for every route to check if the user is >> logged in, is there a better way to organize this? 
>> >> what I am looking for is a way to check if the user is logged in and if >> not, redirect to login route. >> >> >> PS: I am new to Erlang >> >> >> Regards, >> Avinash D' Silva >> >> >> >> _______________________________________________ >> erlang-questions mailing list >> erlang-questions@REDACTED >> http://erlang.org/mailman/listinfo/erlang-questions >> >> > -- Powered By codologic -------------- next part -------------- An HTML attachment was scrubbed... URL: From grahamrhay@REDACTED Mon Dec 7 11:50:58 2015 From: grahamrhay@REDACTED (Graham Hay) Date: Mon, 7 Dec 2015 10:50:58 +0000 Subject: [erlang-questions] how to better organize code for a website In-Reply-To: References: Message-ID: No, I don't think so, it's not quite as flexible as something like Express. But you can check the Req path in your middleware. On 7 December 2015 at 10:39, avinash D'silva wrote: > wow!! thanks! > > It worked perfectly. > > > is there a way to assign the middleware to specific routes? currently all > the public routes are blocked due to the login middleware. > > > > > On Mon, Dec 7, 2015 at 3:08 PM, Graham Hay wrote: > >> You might be able to do something with a cowboy middleware: >> >> http://ninenines.eu/docs/en/cowboy/HEAD/guide/middlewares/ >> >> Or, failing that, you could just extract that case to a module somewhere, >> and pass in "do something here" as a fun. >> >> On 7 December 2015 at 09:03, avinash D'silva wrote: >> >>> Hi, >>> >>> >>> I am using cowboy+cowboy_session for building websites. >>> >>> each route has something similar as shown below: >>> >>> >>> -module(get_pageX). >>> -behaviour(cowboy_http_handler). >>> >>> -export([init/3]). >>> -export([handle/2]). >>> -export([terminate/3]). >>> >>> -record(state, { >>> }). >>> >>> init(_, Req, _Opts) -> >>> {ok, Req, #state{}}. >>> >>> handle(Req, State=#state{}) -> >>> >>> >>> >>> case cowboy_session:get("loggedin", Req) of >>> >>> true -> >>> #do something here >>> >>> _ -> >>> #not logged in. 
>>> >>> end, >>> >>> >>> >>> {ok, ReqFinal} = cowboy_req:reply(200, >>> [{<<"content-type">>, <<"text/html">>}], >>> <<"Some response">>, >>> Req), >>> >>> {ok, ReqFinal, State}. >>> >>> terminate(_Reason, _Req, _State) -> >>> ok. >>> >>> >>> The above CASE(code) is repeated for every route to check if the user is >>> logged in, is there a better way to organize this? >>> >>> what I am looking for is a way to check if the user is logged in and if >>> not, redirect to login route. >>> >>> >>> PS: I am new to Erlang >>> >>> >>> Regards, >>> Avinash D' Silva >>> >>> >>> >>> _______________________________________________ >>> erlang-questions mailing list >>> erlang-questions@REDACTED >>> http://erlang.org/mailman/listinfo/erlang-questions >>> >>> >> > > > -- > Powered By codologic > -------------- next part -------------- An HTML attachment was scrubbed... URL: From llbgurs@REDACTED Mon Dec 7 11:52:17 2015 From: llbgurs@REDACTED (linbo liao) Date: Mon, 7 Dec 2015 18:52:17 +0800 Subject: [erlang-questions] Possilbe Erlang memory fragmentation Message-ID: Our Erlang server looks have a serious memory leak, the VM memory usage is low but top is high. *# Env* Erlang: R16B02 OS: Ubuntu 12.04.5 LTS \n \l X86_64 *# Server Architecture* 1. RabbitMQ client consumer Message 2. MQ client cast message to gen_server receiver 3. receiver cast message to worker pool (managed by poolboy) *# Erlang VM* > erlang:memory(). > [{total,424544992}, > {processes,293961840}, > {processes_used,293937232}, > {system,130583152}, > {atom,553569}, > {atom_used,521929}, > {binary,9794704}, > {code,14041920}, > {ets,5632280}] > *# Allocated Memory* > recon_alloc:memory(allocated). > 2570059776 > > recon_alloc:memory(allocated_types). > [{binary_alloc,163577856}, > {driver_alloc,11010048}, > {eheap_alloc,2165309440}, > {ets_alloc,11010048}, > {fix_alloc,50855936}, > {ll_alloc,156237824}, > {sl_alloc,2097152}, > {std_alloc,6815744}, > {temp_alloc,3145728}] > *# allocate binary* > recon:bin_leak(5). 
> [{<0.440.0>,-769, > [{current_function,{gen_server,loop,6}}, > {initial_call,{proc_lib,init_p,5}}]}, > {<0.446.0>,-230, > [{current_function,{gen_server,loop,6}}, > {initial_call,{proc_lib,init_p,5}}]}, > {<0.450.0>,-179, > [{current_function,{gen_server,loop,6}}, > {initial_call,{proc_lib,init_p,5}}]}, > {<0.12497.0>,-147, > [{current_function,{gen,do_call,4}}, > {initial_call,{proc_lib,init_p,5}}]}, > {<0.434.0>,-145, > [{current_function,{cberl_worker,mget,4}}, > {initial_call,{proc_lib,init_p,5}}]}] > *# Do garbage* > 7> erlang:garbage_collect(). > true > 8> erlang:memory(). > [{total,381782256}, > {processes,251371752}, > {processes_used,251361352}, > {system,130410504}, > {atom,553569}, > {atom_used,521929}, > {binary,9230384}, > {code,14041920}, > {ets,5675528}] > > recon_alloc:memory(allocated_types). > [{binary_alloc,150994944}, > {driver_alloc,11010048}, > {eheap_alloc,2154823680}, > {ets_alloc,11010048}, > {fix_alloc,50855936}, > {ll_alloc,156237824}, > {sl_alloc,2097152}, > {std_alloc,6815744}, > {temp_alloc,3145728}] > *# Fragmentation* I execute recon_alloc:fragmentation(current) and recon_alloc:fragmentation(max), find some allocator current usage is lower than max usage. 
*## Current usage* > {{binary_alloc,0}, > [{sbcs_usage,1.0}, > {mbcs_usage,0.037804497612847224}, > {sbcs_block_size,0}, > {sbcs_carriers_size,0}, > {mbcs_block_size,178384}, > {mbcs_carriers_size,4718592}]}, > {{binary_alloc,2}, > [{sbcs_usage,2.0}, > {mbcs_usage,0.05326200786389803}, > {sbcs_block_size,0}, > {sbcs_carriers_size,0}, > {mbcs_block_size,4775112}, > {mbcs_carriers_size,89653248}]}, > {{binary_alloc,1}, > [{sbcs_usage,2.0}, > {mbcs_usage,0.0643930146188447}, > {sbcs_block_size,0}, > {sbcs_carriers_size,0}, > {mbcs_block_size,4456384}, > {mbcs_carriers_size,69206016}]}, > *## Max usage* {{binary_alloc,0}, > [{sbcs_usage,1.0}, > {mbcs_usage,0.7732696533203125}, > {sbcs_block_size,0}, > {sbcs_carriers_size,0}, > {mbcs_block_size,24324960}, > {mbcs_carriers_size,31457280}]}, > {{binary_alloc,2}, > [{sbcs_usage,1.0}, > {mbcs_usage,0.938345729714573}, > {sbcs_block_size,0}, > {sbcs_carriers_size,0}, > {mbcs_block_size,149064912}, > {mbcs_carriers_size,158859264}]}, > {{binary_alloc,0}, > [{sbcs_usage,1.0}, > {mbcs_usage,0.7732696533203125}, > {sbcs_block_size,0}, > {sbcs_carriers_size,0}, > {mbcs_block_size,24324960}, > {mbcs_carriers_size,31457280}]}, > Does it mean Erlang server have lots of Memory fragmentation since MQ binary message pass through multi gen_server? How can I move on for this issue? Thanks, Linbo -------------- next part -------------- An HTML attachment was scrubbed... URL: From kansi13@REDACTED Mon Dec 7 10:13:05 2015 From: kansi13@REDACTED (Vanshdeep Singh) Date: Mon, 7 Dec 2015 14:43:05 +0530 Subject: [erlang-questions] how to better organize code for a website In-Reply-To: References: Message-ID: Hey, I think you can use cowboy middleware https://github.com/ninenines/cowboy/tree/master/examples/markdown_middleware or you can also use on_request option in cowboy:start_http() Regards, Vansh On Mon, Dec 7, 2015 at 2:33 PM, avinash D'silva wrote: > Hi, > > > I am using cowboy+cowboy_session for building websites. 
> > each route has something similar as shown below: > > > -module(get_pageX). > -behaviour(cowboy_http_handler). > > -export([init/3]). > -export([handle/2]). > -export([terminate/3]). > > -record(state, { > }). > > init(_, Req, _Opts) -> > {ok, Req, #state{}}. > > handle(Req, State=#state{}) -> > > > > case cowboy_session:get("loggedin", Req) of > > true -> > #do something here > > _ -> > #not logged in. > > end, > > > > {ok, ReqFinal} = cowboy_req:reply(200, > [{<<"content-type">>, <<"text/html">>}], > <<"Some response">>, > Req), > > {ok, ReqFinal, State}. > > terminate(_Reason, _Req, _State) -> > ok. > > > The above CASE(code) is repeated for every route to check if the user is > logged in, is there a better way to organize this? > > what I am looking for is a way to check if the user is logged in and if > not, redirect to login route. > > > PS: I am new to Erlang > > > Regards, > Avinash D' Silva > > > > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From evnix.com@REDACTED Mon Dec 7 13:04:35 2015 From: evnix.com@REDACTED (avinash D'silva) Date: Mon, 7 Dec 2015 17:34:35 +0530 Subject: [erlang-questions] how to better organize code for a website In-Reply-To: References: Message-ID: ok, got it! I find doing everything so much more natural in Erlang when compared to Node/Express. Thanks for all the help. On Mon, Dec 7, 2015 at 4:20 PM, Graham Hay wrote: > No, I don't think so, it's not quite as flexible as something like > Express. But you can check the Req path in your middleware. > > On 7 December 2015 at 10:39, avinash D'silva wrote: > >> wow!! thanks! >> >> It worked perfectly. >> >> >> is there a way to assign the middleware to specific routes? currently >> all the public routes are blocked due to the login middleware. 
>> >> >> >> >> On Mon, Dec 7, 2015 at 3:08 PM, Graham Hay wrote: >> >>> You might be able to do something with a cowboy middleware: >>> >>> http://ninenines.eu/docs/en/cowboy/HEAD/guide/middlewares/ >>> >>> Or, failing that, you could just extract that case to a module somewhere, >>> and pass in "do something here" as a fun. >>> >>> On 7 December 2015 at 09:03, avinash D'silva >>> wrote: >>> >>>> Hi, >>>> >>>> >>>> I am using cowboy+cowboy_session for building websites. >>>> >>>> each route has something similar as shown below: >>>> >>>> >>>> -module(get_pageX). >>>> -behaviour(cowboy_http_handler). >>>> >>>> -export([init/3]). >>>> -export([handle/2]). >>>> -export([terminate/3]). >>>> >>>> -record(state, { >>>> }). >>>> >>>> init(_, Req, _Opts) -> >>>> {ok, Req, #state{}}. >>>> >>>> handle(Req, State=#state{}) -> >>>> >>>> >>>> >>>> case cowboy_session:get("loggedin", Req) of >>>> >>>> true -> >>>> #do something here >>>> >>>> _ -> >>>> #not logged in. >>>> >>>> end, >>>> >>>> >>>> >>>> {ok, ReqFinal} = cowboy_req:reply(200, >>>> [{<<"content-type">>, <<"text/html">>}], >>>> <<"Some response">>, >>>> Req), >>>> >>>> {ok, ReqFinal, State}. >>>> >>>> terminate(_Reason, _Req, _State) -> >>>> ok. >>>> >>>> >>>> The above CASE(code) is repeated for every route to check if the user >>>> is logged in, is there a better way to organize this? >>>> >>>> what I am looking for is a way to check if the user is logged in and if >>>> not, redirect to login route. >>>> >>>> >>>> PS: I am new to Erlang >>>> >>>> >>>> Regards, >>>> Avinash D' Silva >>>> >>>> >>>> >>>> _______________________________________________ >>>> erlang-questions mailing list >>>> erlang-questions@REDACTED >>>> http://erlang.org/mailman/listinfo/erlang-questions >>>> >>>> >>> >> >> >> -- >> Powered By codologic >> > > -- Powered By codologic -------------- next part -------------- An HTML attachment was scrubbed... 
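[Editor's note: pulling this thread's two suggestions together, below is a sketch of a cowboy 1.x middleware that runs the session check once per request and inspects the request path to exempt public routes. The module name, the path list, and the login route are hypothetical; the `cowboy_session:get/2` return shape is assumed to match the original post.]

```erlang
%% Sketch of a login-check middleware for cowboy 1.x, combining the two
%% suggestions above: do the session check in one place, and inspect the
%% request path so that public routes are not blocked. All names here
%% (auth_middleware, the ?PUBLIC list, /login) are hypothetical.
-module(auth_middleware).
-behaviour(cowboy_middleware).

-export([execute/2]).

%% Paths that do not require a session.
-define(PUBLIC, [<<"/">>, <<"/login">>, <<"/signup">>]).

execute(Req, Env) ->
    %% cowboy 1.x accessors return {Value, Req}.
    {Path, Req2} = cowboy_req:path(Req),
    case lists:member(Path, ?PUBLIC) of
        true ->
            {ok, Req2, Env};
        false ->
            case cowboy_session:get("loggedin", Req2) of
                true ->
                    {ok, Req2, Env};
                _ ->
                    %% Not logged in: redirect to the login route and
                    %% stop processing this request.
                    {ok, Req3} = cowboy_req:reply(302,
                        [{<<"location">>, <<"/login">>}], <<>>, Req2),
                    {halt, Req3}
            end
    end.
```

The middleware would then be listed in the protocol options when starting the listener, e.g. `{middlewares, [cowboy_router, auth_middleware, cowboy_handler]}` passed to `cowboy:start_http/4`, so it runs after routing and before the handler.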
URL: From henrik.x.nord@REDACTED Mon Dec 7 15:50:40 2015 From: henrik.x.nord@REDACTED (Henrik Nord X) Date: Mon, 7 Dec 2015 15:50:40 +0100 Subject: [erlang-questions] Patch package OTP 17.5.6.6 released Message-ID: <56659CC0.7090502@ericsson.com> Patch Package: OTP 17.5.6.6 Git Tag: OTP-17.5.6.6 Date: 2015-12-07 Trouble Report Id: OTP-13150 Seq num: System: OTP Release: 17 Application: erts-6.4.1.5 Predecessor: OTP 17.5.6.5 Check out the git tag OTP-17.5.6.6, and build a full OTP system including documentation. Apply one or more applications from this build as patches to your installation using the 'otp_patch_apply' tool. For information on install requirements, see descriptions for each application version below. --------------------------------------------------------------------- --- erts-6.4.1.5 ---------------------------------------------------- --------------------------------------------------------------------- The erts-6.4.1.5 application can be applied independently of other applications on a full OTP 17 installation. --- Fixed Bugs and Malfunctions --- OTP-13150 Application(s): erts Fixed a bug that could cause a crash dump to become almost empty. Full runtime dependencies of erts-6.4.1.5: kernel-3.0, sasl-2.4, stdlib-2.0 --------------------------------------------------------------------- --------------------------------------------------------------------- --------------------------------------------------------------------- From mononcqc@REDACTED Mon Dec 7 20:30:31 2015 From: mononcqc@REDACTED (Fred Hebert) Date: Mon, 7 Dec 2015 14:30:31 -0500 Subject: [erlang-questions] Possilbe Erlang memory fragmentation In-Reply-To: References: Message-ID: <20151207193029.GE886@fhebert-ltm1> On 12/07, linbo liao wrote: >Our Erlang server looks have a serious memory leak, the VM memory usage is >low but top is high. 
> >*# Fragmentation* > >I execute recon_alloc:fragmentation(current) and >recon_alloc:fragmentation(max), find some allocator current usage is lower >than max usage. > >*## Current usage* > >> {{binary_alloc,0}, >> [{sbcs_usage,1.0}, >> {mbcs_usage,0.037804497612847224}, >> {sbcs_block_size,0}, >> {sbcs_carriers_size,0}, >> {mbcs_block_size,178384}, >> {mbcs_carriers_size,4718592}]}, >> {{binary_alloc,2}, >> [{sbcs_usage,2.0}, >> {mbcs_usage,0.05326200786389803}, >> {sbcs_block_size,0}, >> {sbcs_carriers_size,0}, >> {mbcs_block_size,4775112}, >> {mbcs_carriers_size,89653248}]}, >> {{binary_alloc,1}, >> [{sbcs_usage,2.0}, >> {mbcs_usage,0.0643930146188447}, >> {sbcs_block_size,0}, >> {sbcs_carriers_size,0}, >> {mbcs_block_size,4456384}, >> {mbcs_carriers_size,69206016}]}, >> Yeah, those are very, very low usage values on mbcs (<5%). Can you give information such as the average block size (also in recon_alloc), how long the node has been running, and so on? All values for current and max are useful for these. Also if you could provide your allocator strategy that could be useful. From vances@REDACTED Tue Dec 8 05:16:09 2015 From: vances@REDACTED (Vance Shipley) Date: Tue, 8 Dec 2015 09:46:09 +0530 Subject: [erlang-questions] Thank you for 17 years of Erlang/OTP Message-ID: It was seventeen years ago today that Erlang/OTP was released as open source. On this occasion I offer my heartfelt thanks to the OTP team for their fantastic contribution and support to the community. http://web.archive.org/web/19991009002753/www.erlang.se/onlinenews/ErlangOTPos.shtml -------------- next part -------------- An HTML attachment was scrubbed... URL: From llbgurs@REDACTED Tue Dec 8 05:16:35 2015 From: llbgurs@REDACTED (linbo liao) Date: Tue, 8 Dec 2015 12:16:35 +0800 Subject: [erlang-questions] Possilbe Erlang memory fragmentation In-Reply-To: <20151207193029.GE886@fhebert-ltm1> References: <20151207193029.GE886@fhebert-ltm1> Message-ID: Thanks Fred. 
I restart the node due to high memory usage, today everything looks fine. Will update the data if it happen again. Thanks, Linbo 2015-12-08 3:30 GMT+08:00 Fred Hebert : > On 12/07, linbo liao wrote: > >> Our Erlang server looks have a serious memory leak, the VM memory usage is >> low but top is high. >> >> *# Fragmentation* >> >> I execute recon_alloc:fragmentation(current) and >> recon_alloc:fragmentation(max), find some allocator current usage is lower >> than max usage. >> >> *## Current usage* >> >> {{binary_alloc,0}, >>> [{sbcs_usage,1.0}, >>> {mbcs_usage,0.037804497612847224}, >>> {sbcs_block_size,0}, >>> {sbcs_carriers_size,0}, >>> {mbcs_block_size,178384}, >>> {mbcs_carriers_size,4718592}]}, >>> {{binary_alloc,2}, >>> [{sbcs_usage,2.0}, >>> {mbcs_usage,0.05326200786389803}, >>> {sbcs_block_size,0}, >>> {sbcs_carriers_size,0}, >>> {mbcs_block_size,4775112}, >>> {mbcs_carriers_size,89653248}]}, >>> {{binary_alloc,1}, >>> [{sbcs_usage,2.0}, >>> {mbcs_usage,0.0643930146188447}, >>> {sbcs_block_size,0}, >>> {sbcs_carriers_size,0}, >>> {mbcs_block_size,4456384}, >>> {mbcs_carriers_size,69206016}]}, >>> >>> > Yeah, those are very, very low usage values on mbcs (<5%). Can you give > information such as the average block size (also in recon_alloc), how long > the node has been running, and so on? > > All values for current and max are useful for these. Also if you could > provide your allocator strategy that could be useful. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bengt.kleberg@REDACTED Tue Dec 8 11:00:06 2015 From: bengt.kleberg@REDACTED (Bengt Kleberg) Date: Tue, 8 Dec 2015 11:00:06 +0100 Subject: [erlang-questions] clarify: Message-ID: <5666AA26.3030500@ericsson.com> Greetings, How do I make this output stay on one line? According to io:columns/0 I have 160 characters, and the output is only 100. 
3> io:fwrite( "Erlang memory usageErlang memory usage: ~p~n",[erlang:process_info(erlang:self(), [stack_size, heap_size, total_heap_size])] ). Erlang memory usageErlang memory usage: [{stack_size,44}, {heap_size,987}, {total_heap_size, 2585}] bengt From vladdu55@REDACTED Tue Dec 8 11:02:48 2015 From: vladdu55@REDACTED (Vlad Dumitrescu) Date: Tue, 8 Dec 2015 11:02:48 +0100 Subject: [erlang-questions] clarify: In-Reply-To: <5666AA26.3030500@ericsson.com> References: <5666AA26.3030500@ericsson.com> Message-ID: Hi Bengt, I suppose the easiest suggestion is to use ~w instead of ~p? regards, Vlad On Tue, Dec 8, 2015 at 11:00 AM, Bengt Kleberg wrote: > Greetings, > > How do I make this output stay on one line? According to io:columns/0 I > have 160 characters, and the output is only 100. > > 3> io:fwrite( "Erlang memory usageErlang memory usage: > ~p~n",[erlang:process_info(erlang:self(), [stack_size, heap_size, > total_heap_size])] ). > > Erlang memory usageErlang memory usage: [{stack_size,44}, > {heap_size,987}, > {total_heap_size, > 2585}] > > > bengt > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bengt.kleberg@REDACTED Tue Dec 8 11:03:51 2015 From: bengt.kleberg@REDACTED (Bengt Kleberg) Date: Tue, 8 Dec 2015 11:03:51 +0100 Subject: [erlang-questions] clarify: In-Reply-To: References: <5666AA26.3030500@ericsson.com> Message-ID: <5666AB07.9020208@ericsson.com> Thank you, that solved it. On 12/08/2015 11:02 AM, Vlad Dumitrescu wrote: > Hi Bengt, > > I suppose the easiest suggestion is to use ~w instead of ~p? > > regards, > Vlad > > > On Tue, Dec 8, 2015 at 11:00 AM, Bengt Kleberg > > wrote: > > Greetings, > > How do I make this output stay on one line? According to > io:columns/0 I have 160 characters, and the output is only 100. 
> > 3> io:fwrite( "Erlang memory usageErlang memory usage: > ~p~n",[erlang:process_info(erlang:self(), [stack_size, heap_size, > total_heap_size])] ). > > Erlang memory usageErlang memory usage: [{stack_size,44}, > {heap_size,987}, > {total_heap_size, > 2585}] > > > bengt > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jose.valim@REDACTED Tue Dec 8 11:55:52 2015 From: jose.valim@REDACTED (=?UTF-8?Q?Jos=C3=A9_Valim?=) Date: Tue, 8 Dec 2015 11:55:52 +0100 Subject: [erlang-questions] Thank you for 17 years of Erlang/OTP In-Reply-To: References: Message-ID: Well said Vance! I have been using Erlang and OTP for almost 6 years and it is always a pleasure. Thanks to the OTP team for the amazing work! *José Valim* www.plataformatec.com.br Skype: jv.ptec Founder and Director of R&D On Tue, Dec 8, 2015 at 5:16 AM, Vance Shipley wrote: > It was seventeen years ago today that Erlang/OTP was released as open > source. On this occasion I offer my heartfelt thanks to the OTP team for > their fantastic contribution and support to the community. > > > http://web.archive.org/web/19991009002753/www.erlang.se/onlinenews/ErlangOTPos.shtml > > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From valentin@REDACTED Tue Dec 8 12:17:36 2015 From: valentin@REDACTED (Valentin Micic) Date: Tue, 8 Dec 2015 13:17:36 +0200 Subject: [erlang-questions] Thank you for 17 years of Erlang/OTP In-Reply-To: References: Message-ID: <18E579C2-2F8F-4205-85DE-637E29232890@pixie.co.za> Hear, hear! Erlang changed the way I think about programming, and indeed, transformed my career by leading me out of C/C++ caves.
Thank you Ericsson for having the courage to release it to the general public, whilst still maintaining a meaningful level of control. Not everyone may agree with this, but, for what it's worth, I think this has to be a sign of an exceptional company. V/ On 08 Dec 2015, at 12:55 PM, José Valim wrote: > Well said Vance! I have been using Erlang and OTP for almost 6 years and it is always a pleasure. > > Thanks to the OTP team for the amazing work! > > > > José Valim > www.plataformatec.com.br > Skype: jv.ptec > Founder and Director of R&D > > On Tue, Dec 8, 2015 at 5:16 AM, Vance Shipley wrote: > It was seventeen years ago today that Erlang/OTP was released as open source. On this occasion I offer my heartfelt thanks to the OTP team for their fantastic contribution and support to the community. > > http://web.archive.org/web/19991009002753/www.erlang.se/onlinenews/ErlangOTPos.shtml > > > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions > > > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From bog495@REDACTED Tue Dec 8 12:18:11 2015 From: bog495@REDACTED (Bogdan Andu) Date: Tue, 8 Dec 2015 13:18:11 +0200 Subject: [erlang-questions] UDP socket - ealready ERROR Message-ID: Hi, I try to build a concurrent UDP server in Erlang, The socket is opened in a supervisor like this: init([]) -> %%xpp_config_lib:reload_config_file(), [{port, Port}] = ets:lookup(config, port), [{listen, IPv4}] = ets:lookup(config, listen), %% for worker [{ssl_recv_timeout, SslRecvTimeout}] = ets:lookup(config, ssl_recv_timeout), {ok, Sock} = gen_udp:open(Port, [binary, {active, false}, {reuseaddr, true}, {ip, IPv4}, {mode, binary} ]), MpsConn = {mps_conn_fsm,{mps_conn_fsm, start_link, [Sock, SslRecvTimeout, false]}, temporary, 5000, worker, [mps_conn_fsm]}, {ok, {{simple_one_for_one, 0, 1}, [MpsConn]}}. and in worker I have: init([Sock, SslRecvTimeout, Inet6]) -> process_flag(trap_exit, true), {ok, recv_pckt, #ssl_conn_state{lsock = Sock, ssl_recv_timeout = SslRecvTimeout, conn_pid = self(), inet6 = Inet6}, 0}. %% -------------------------------------------------------------------- %% Func: StateName/2 %% Returns: {next_state, NextStateName, NextStateData} | %% {next_state, NextStateName, NextStateData, Timeout} | %% {stop, Reason, NewStateData} %% -------------------------------------------------------------------- recv_pckt(Event, #ssl_conn_state{lsock = ListenSock, inet6 = Inet6, ssl_recv_timeout = SslRecvTimeout} = StateData) -> %% io:format("**epp_login~n", []), %% gen_udp:close(ListenSock), {ok, {Address, Port, Packet}} = gen_udp:recv(ListenSock, 0, SslRecvTimeout), io:format("~p~n", [Packet]), gen_udp:close(ListenSock), {stop, normal, StateData}. %% {next_state, recv_pckt, StateData, 0}. .. and in Erlang shell : Erlang/OTP 18 [erts-7.0] [source] [64-bit] [smp:2:2] [async-threads:10] [kernel-poll:false] mps_dbg@REDACTED)1> (mps_dbg@REDACTED)1> mps_conn_sup:start_child(). {ok,<0.62.0>} (mps_dbg@REDACTED)2> mps_conn_sup:start_child(). 
{ok,<0.64.0>} =ERROR REPORT==== 8-Dec-2015::13:09:55 === [<0.64.0>]:[2]:TERMINATE:REASON={{badmatch,{error,ealready}}, [{mps_conn_fsm,recv_pckt,2, [{file, "/home/andu/remote/hp/home/andu/web/mps/src/mps_conn_fsm.erl"}, {line,80}]}, {gen_fsm,handle_msg,7, [{file,"gen_fsm.erl"},{line,518}]}, {proc_lib,init_p_do_apply,3, [{file,"proc_lib.erl"},{line,239}]}]}:undefined (mps_dbg@REDACTED)3> =ERROR REPORT==== 8-Dec-2015::13:09:55 === ** State machine <0.64.0> terminating ** Last event in was timeout ** When State == recv_pckt ** Data == {ssl_conn_state,#Port<0.1632>,60000,undefined,undefined, undefined,[],[],<0.64.0>,undefined,undefined, undefined,"en",undefined,false,undefined, undefined,undefined,undefined,undefined, false,<<>>,<<>>,undefined,0} ** Reason for termination = ** {{badmatch,{error,ealready}}, [{mps_conn_fsm,recv_pckt,2, [{file,"/home/andu/remote/hp/home/andu/web/mps/src/mps_conn_fsm.erl"}, {line,80}]}, {gen_fsm,handle_msg,7,[{file,"gen_fsm.erl"},{line,518}]}, {proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,239}]}]} =CRASH REPORT==== 8-Dec-2015::13:09:55 === crasher: initial call: mps_conn_fsm:init/1 pid: <0.64.0> registered_name: [] exception exit: {{badmatch,{error,ealready}}, [{mps_conn_fsm,recv_pckt,2, [{file, "/home/andu/remote/hp/home/andu/web/mps/src/mps_conn_fsm.erl"}, {line,80}]}, {gen_fsm,handle_msg,7,[{file,"gen_fsm.erl"},{line,518}]}, {proc_lib,init_p_do_apply,3, [{file,"proc_lib.erl"},{line,239}]}]} in function gen_fsm:terminate/7 (gen_fsm.erl, line 626) ancestors: [mps_conn_sup,<0.60.0>,<0.56.0>] messages: [] links: [<0.61.0>] dictionary: [] trap_exit: true status: running heap_size: 610 stack_size: 27 reductions: 295 neighbours: =SUPERVISOR REPORT==== 8-Dec-2015::13:09:55 === Supervisor: {local,mps_conn_sup} Context: child_terminated Reason: {{badmatch,{error,ealready}}, [{mps_conn_fsm,recv_pckt,2, [{file, "/home/andu/remote/hp/home/andu/web/mps/src/mps_conn_fsm.erl"}, {line,80}]}, 
{gen_fsm,handle_msg,7,[{file,"gen_fsm.erl"},{line,518}]}, {proc_lib,init_p_do_apply,3, [{file,"proc_lib.erl"},{line,239}]}]} Offender: [{pid,<0.64.0>}, {id,mps_conn_fsm}, {mfargs,{mps_conn_fsm,start_link,undefined}}, {restart_type,temporary}, {shutdown,5000}, {child_type,worker}] When I try to start another concurrent worker I get an ealready error. How can this error be avoided? The option to make the socket active introduces a bottleneck, as all messages are sent to the controlling process. Is there any configuration parameter at the OS kernel level or via inet_dist to fix the error? OS : Linux localhost 4.0.4-301.fc22.x86_64 #1 SMP Thu May 21 13:10:33 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux Any help very much appreciated, Bogdan -------------- next part -------------- An HTML attachment was scrubbed... URL: From bog495@REDACTED Wed Dec 9 09:16:31 2015 From: bog495@REDACTED (Bogdan Andu) Date: Wed, 9 Dec 2015 10:16:31 +0200 Subject: [erlang-questions] UDP socket - ealready ERROR In-Reply-To: <5666F638.1010709@gandrade.net> References: <5666F638.1010709@gandrade.net> Message-ID: After more tests, the basic question that remains: is there a way to have more than one process blocked in a gen_udp:recv/2 call? This does not seem to be possible right now. Sequentially it works as expected, but when I try to spawn a second process that attempts to execute gen_udp:recv/2 while the first process is already in gen_udp:recv/2, the second process gets an ealready error. This means that two processes cannot concurrently call gen_udp:recv/2. In the scenario with the socket set to {active, once} or {active, true}, only one process can receive messages from the socket (the one that called gen_udp:open/2), and for multi-process applications this can quickly become a bottleneck. In this case, however, the ealready error disappears, of course. I tried both variants and both have disadvantages. Is there an idiom for designing a concurrent UDP server in Erlang? So far, it seems impossible. 
/Bogdan On Tue, Dec 8, 2015 at 5:24 PM, Guilherme Andrade wrote: > Hi, > > On 08/12/15 11:18, Bogdan Andu wrote: > > [...] > > MpsConn = {mps_conn_fsm,{mps_conn_fsm, start_link, [Sock, > SslRecvTimeout, false]}, > temporary, 5000, worker, [mps_conn_fsm]}, > > {ok, {{simple_one_for_one, 0, 1}, [MpsConn]}}. > > [...] > > > mps_dbg@REDACTED)1> > (mps_dbg@REDACTED)1> mps_conn_sup:start_child(). > {ok,<0.62.0>} > (mps_dbg@REDACTED)2> mps_conn_sup:start_child(). > {ok,<0.64.0>} > > > Here is the culprit: you're binding the socket only *once* in the > supervisor[1], which will be its controlling process, and then launching > two workers which will try both to read from the same socket (which they > cannot do because they don't control it) and then close it (which, if > execution were to even reach that point, wouldn't also be what I imagine > you most likely intend because you would end up closing the same socket > twice.) > > One solution is to remove the socket from the child spec and move the > socket binding to the worker code. > > In any case, if it were me, I would first try to have a single binding > process which receives the datagrams and launches workers, and avoid > overload by combining the {active, N}[2] flow with whichever backpressure > metric your system would signal; there's no particular advantage to having > multiple bindings over the same port - you won't really end up processing > multiple flows at once as if they were multiple gen_tcp accepted sockets. > > On a final note, I would also advise against executing potentially > blocking operations on OTP processes' init/1 and make it asynchronous > instead (e.g. by casting an initialisation request to itself) or you'll > risk choking up the supervision tree. 
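The asynchronous-init pattern mentioned in the note above (keeping blocking work out of init/1 by casting an initialisation request to oneself) can be sketched roughly as follows. This is an illustration only, not the poster's actual code; the module and all names are hypothetical.

```erlang
%% Sketch: deferring blocking setup out of init/1 so the supervisor
%% is not held up while the worker binds its socket.
%% Module, names and option lists are hypothetical.
-module(udp_worker).
-behaviour(gen_server).
-export([start_link/1, init/1, handle_cast/2, handle_info/2, handle_call/3]).

start_link(Port) ->
    gen_server:start_link(?MODULE, Port, []).

init(Port) ->
    %% Return immediately; the real (potentially slow) work happens
    %% when the 'bind' cast is handled, after init/1 has returned.
    ok = gen_server:cast(self(), bind),
    {ok, #{port => Port, sock => undefined}}.

handle_cast(bind, #{port := Port} = State) ->
    %% Blocking call happens here, not in init/1.
    {ok, Sock} = gen_udp:open(Port, [binary, {active, once}]),
    {noreply, State#{sock := Sock}}.

handle_info({udp, Sock, _Addr, _Port, _Packet}, State) ->
    %% Re-arm the socket for the next datagram.
    inet:setopts(Sock, [{active, once}]),
    {noreply, State}.

handle_call(_Req, _From, State) ->
    {reply, ok, State}.
```

Because the cast is sent from within init/1, it is guaranteed to be the first message the new gen_server handles, so the socket is bound before any datagram can be processed.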
> > [1]: http://www.erlang.org/doc/man/supervisor.html#id242517 > [2]: http://www.erlang.org/doc/man/inet.html#setopts-2 > > -- > Guilherme > http://www.gandrade.net/ > PGP: 0x602B2AD8 / B348 C976 CCE1 A02A 017E 4649 7A6E B621 602B 2AD8 > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bog495@REDACTED Wed Dec 9 10:39:27 2015 From: bog495@REDACTED (Bogdan Andu) Date: Wed, 9 Dec 2015 11:39:27 +0200 Subject: [erlang-questions] UDP concurrent server Message-ID: following the thread https://groups.google.com/forum/?hl=en#!topic/erlang-programming/6Q3cLtJdwIU as it seems that posting to the topic does not work After more tests, the basic question that remains: is there a way to have more than one process blocked in a gen_udp:recv/2 call? This does not seem to be possible, probably because of the way UDP sockets work. Sequentially it works as expected, but when I try to spawn a second process that attempts to execute gen_udp:recv/2 while the first process is already in gen_udp:recv/2, the second process gets an ealready error. This means that two processes cannot concurrently call gen_udp:recv/2. In the scenario with the socket set to {active, once} or {active, true}, only one process can receive messages from the socket (the one that called gen_udp:open/2), and for multi-process applications this can quickly become a bottleneck. In this case, however, the ealready error disappears, of course. I tried both variants and both have disadvantages. Is there an idiom for designing a concurrent UDP server in Erlang? So far, it seems impossible. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From bog495@REDACTED Wed Dec 9 09:36:26 2015 From: bog495@REDACTED (Bogdan Andu) Date: Wed, 9 Dec 2015 00:36:26 -0800 (PST) Subject: [erlang-questions] state machine question In-Reply-To: <7a5dd879-8abb-4838-ac1b-4dccca79ef89@googlegroups.com> References: <7a5dd879-8abb-4838-ac1b-4dccca79ef89@googlegroups.com> Message-ID: Hi, You can use a counter in internal state. -record(state, { counter=0 }). 
Every time you receive a message, increment the counter. An Erlang sketch: loop(#state{counter = Counter} = State) -> receive 1 -> case Counter of 0 -> b(); _ -> c() end, loop(State#state{counter = Counter + 1}) end. Bogdan On Thursday, July 30, 2015 at 11:36:37 AM UTC+3, Ravindra M wrote: > > Hi All, > I am new to Erlang(been programming in python and C). > > I have trouble in implementing following program, > Lets say I have, > 1) function a which receives message in infinite loop. > 2) function a should call function b when first message is received. > 3) call function c when subsequent messages called. > > Lets assume there is no difference between message, example your always > receive '1'. > > So first '1', will call function b and all subsequent '1' call function c > executed. > > Thanks, > Ravi > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bog495@REDACTED Wed Dec 9 09:48:03 2015 From: bog495@REDACTED (Bogdan Andu) Date: Wed, 9 Dec 2015 00:48:03 -0800 (PST) Subject: [erlang-questions] Binary leak on VM? How to resolve this issue? In-Reply-To: References: Message-ID: <13e1b02f-34c4-4c41-a756-d0af38fb251f@googlegroups.com> Hi, Most likely you have temporary processes that handle binaries and never die, keeping those binaries referenced so the GC cannot garbage collect them. This is a sign of bad process cleanup management logic. On Tuesday, November 10, 2015 at 4:59:27 AM UTC+2, ??? wrote: > > Hello All :) > > > I have a some question about Erlang binary memory management. > > I'm running the application on Erlang VM 17.4 > (Erlang/OTP 17 [erts-6.3] [source-f9282c6] [64-bit] [smp:24:24] > [async-threads:10] [hipe] [kernel-poll:false]) > > This application is processing many binaries. (relaying some binary > packets from A client to B client) > > But A weeks ago, binary metric in system does not drop on base line (like > below picture). 
> Base line of binary memory is about 144 MB but only increased :( > > > > > > So I ran some codes in machine's console > > (xxx@REDACTED)1> lists:sum([try > (xxx@REDACTED)1> {_,M} = erlang:process_info(Pid, binary), > (xxx@REDACTED)1> lists:sum([ByteSize || {_, ByteSize, _} <- M]) > (xxx@REDACTED)1> catch > (xxx@REDACTED)1> _:_ -> 0 > (xxx@REDACTED)1> end || Pid <- processes()]). > 151431354 > (xxx@REDACTED)2> CS = lists:foldl( > (xxx@REDACTED)2> fun ({instance, _, L}, Acc) -> > (xxx@REDACTED)2> {value,{_,MBCS}} = lists:keysearch(mbcs, 1, L), > (xxx@REDACTED)2> {value,{_,SBCS}} = lists:keysearch(sbcs, 1, L), > (xxx@REDACTED)2> [MBCS,SBCS | Acc] > (xxx@REDACTED)2> end, > (xxx@REDACTED)2> [], > (xxx@REDACTED)2> erlang:system_info({allocator_sizes, binary_alloc})), > (xxx@REDACTED)2> lists:foldl( > (xxx@REDACTED)2> fun(L, Sz0) -> > (xxx@REDACTED)2> {value,{_,Sz,_,_}} = lists:keysearch(blocks_size, 1, L), > (xxx@REDACTED)2> Sz0+Sz > (xxx@REDACTED)2> end, 0, CS). > 553821984 > > Sum of binaries using processes is 144MB, but some of binary allocator is > 528MB. > This is almost same with below binary memory. > > (xxx@REDACTED)3> erlang:memory(). > [{total,2314432736}, > {processes,1672464336}, > {processes_used,1672334920}, > {system,641968400}, > {atom,2017513}, > {atom_used,2001044}, > {binary,553927480}, <-- > {code,54190392}, > {ets,7211864}] > > Maybe GC problem? so I ran GC forcibly. > > (xxx@REDACTED)4> [erlang:garbage_collect(P) || P <- processes()]. > [true,true,true,true,true,true,true,true,true,true,true, > true,true,true,true,true,true,true,true,true,true,true,true, > true,true,true,true,true,true|...] > > > (xxx@REDACTED)5> erlang:memory(). > [{total,2294793888}, > {processes,1654106832}, > {processes_used,1653947192}, > {system,640687056}, > {atom,2017513}, > {atom_used,2001044}, > {binary,552605080}, <-- > {code,54190392}, > {ets,7240816}] > > Not resolved... :( > > Moreover this issue is only in production environment, no issue in develop > env. 
I can't reproduce it. > About 200 machines in production, but about 110 machines has this issue. > > How can I approach this issue to resolve? > > Ah! my sys.config is like this. > === > +K true > +A 6 > +stbt db > +SP 50:50 > +P 134217727 > -env ERL_FULLSWEEP_AFTER 0 > -heart > === > > Thanks in advance for your advice, > > GiTack, Lee > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nem@REDACTED Wed Dec 9 00:57:06 2015 From: nem@REDACTED (Geoff Cant) Date: Tue, 8 Dec 2015 15:57:06 -0800 Subject: [erlang-questions] Idea for deprecating EPMD Message-ID: Hi all, I find EPMD to be a regular frustration when deploying and operating Erlang systems. EPMD is a separate service that needs to be running for Erlang distribution to work properly, and usually (in systems that don't use distribution for their main function) it's not set up right, and you only notice in production because the only time you use distribution is to get a remote shell (over localhost). (Maybe I'm just bad at doing this, but I do it a lot) Erlang node names already encode host information: "descriptive_name@REDACTED". If we include the Erlang distribution listen port too, that would remove the need for EPMD. For example: "descriptive_name@REDACTED:distribution_port". Node names using this scheme would skip the EPMD step; otherwise Erlang distribution would fall back to the current system. My questions for the list are: * Are you annoyed by epmd too? * Do you think this idea is worth me writing up into an EEP or writing a pull request? * Do you think this idea is unworkable for some reason I'm overlooking? Thanks, -Geoff From z@REDACTED Tue Dec 8 13:15:57 2015 From: z@REDACTED (Danil Zagoskin) Date: Tue, 8 Dec 2015 15:15:57 +0300 Subject: [erlang-questions] ssl_session_cache: trouble + questions Message-ID: Hi! Recently our servers started to consume lots of SYS CPU. Inside a VM, the top processes (by reductions per second) were ssl session validators. 
Most popular current function in runnable processes was calendar:datetime_to_gregorian_seconds/2. Also gproc was very slow (it uses ETS). According to `ets:i().` the largest ETS table was: 49178 server_ssl_otp_session_cache ordered_set 5015080 305919839 ssl_manager We have worked around the problem by using lower session_lifetime. But reading the code I came to these questions: - The cache is `ordered_set` type which has logarithmic access time. Does it have to be `ordered_set`, not just `set` (with constant access time)? - There is no protection agains running multiple validators. This leads to many processes accessing single table and doing the same work. This seems to greatly increase SYS CPU usage and slowdown in other ETS tables. Should we skip new validator start if previous one is still running? - ssl_session:valid_session is called for every session in cache and calls `calendar:datetime_to_gregorian_seconds({date(), time()})` itself. Should we use `erlang:monotonic_time(seconds)` everywhere instead? Or maybe we should pre-calculate minimal allowed timestamp to avoid extra arithmetics? -- Danil Zagoskin | z@REDACTED -------------- next part -------------- An HTML attachment was scrubbed... URL: From g@REDACTED Tue Dec 8 16:24:40 2015 From: g@REDACTED (Guilherme Andrade) Date: Tue, 8 Dec 2015 15:24:40 +0000 Subject: [erlang-questions] UDP socket - ealready ERROR In-Reply-To: References: Message-ID: <5666F638.1010709@gandrade.net> Hi, On 08/12/15 11:18, Bogdan Andu wrote: > [...] > > MpsConn = {mps_conn_fsm,{mps_conn_fsm, start_link, [Sock, > SslRecvTimeout, false]}, > temporary, 5000, worker, [mps_conn_fsm]}, > > {ok, {{simple_one_for_one, 0, 1}, [MpsConn]}}. [...] > > mps_dbg@REDACTED )1> > (mps_dbg@REDACTED )1> > mps_conn_sup:start_child(). > {ok,<0.62.0>} > (mps_dbg@REDACTED )2> > mps_conn_sup:start_child(). 
> {ok,<0.64.0>} Here is the culprit: you're binding the socket only *once* in the supervisor[1], which will be its controlling process, and then launching two workers which will try both to read from the same socket (which they cannot do because they don't control it) and then close it (which, if execution were to even reach that point, wouldn't also be what I imagine you most likely intend because you would end up closing the same socket twice.) One solution is to remove the socket from the child spec and move the socket binding to the worker code. In any case, if it were me, I would first try to have a single binding process which receives the datagrams and launches workers, and avoid overload by combining the {active, N}[2] flow with whichever backpressure metric your system would signal; there's no particular advantage to having multiple bindings over the same port - you won't really end up processing multiple flows at once as if they were multiple gen_tcp accepted sockets. On a final note, I would also advise against executing potentially blocking operations on OTP processes' init/1 and make it asynchronous instead (e.g. by casting an initialisation request to itself) or you'll risk choking up the supervision tree. [1]: http://www.erlang.org/doc/man/supervisor.html#id242517 [2]: http://www.erlang.org/doc/man/inet.html#setopts-2 -- Guilherme http://www.gandrade.net/ PGP: 0x602B2AD8 / B348 C976 CCE1 A02A 017E 4649 7A6E B621 602B 2AD8 -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: OpenPGP digital signature URL: From bengt.e.johansson@REDACTED Wed Dec 9 11:35:51 2015 From: bengt.e.johansson@REDACTED (Bengt Johansson E) Date: Wed, 9 Dec 2015 10:35:51 +0000 Subject: [erlang-questions] UDP concurrent server In-Reply-To: References: Message-ID: Hi Bogdan! 
Your questions make me a bit confused. I wonder *why* you want two processes waiting for packets from the same socket and *what* you expect to happen? Generally one speaks about concurrent servers only regarding TCP, where you bind to a socket waiting for incoming connections and, once a connection is established, spawn a process (or thread, depending on language) to handle the data coming over the connection concurrently. Note that the main loop of the server that waits for connections is always sequential. The new process handling the connection gets a new socket, with a free port, used only for that particular connection. But! UDP lacks support for connections, mainly since it is a message-based protocol and hence is devoid of any connection abstraction. Still the question remains: what is a concurrent UDP server? I guess what you want to achieve is some kind of distribution of incoming packets to several processes that handle them. In that case you should either follow the TCP approach and set up a new socket for each communication, or write a simple process that classifies the incoming packets and distributes them to the correct process based on some information in the packet, e.g. some identifier you have chosen to identify the connection. Basically you have to implement an application-specific load balancer (or rather, load distributor), hoping that the time taken to actually process the packets greatly overshadows the time it takes to distribute them. Anyway, there is no way the underlying UDP stack and/or erts can make that decision for you, i.e. to which process to send the packet. Hope that helps! 
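The load-distributor idea described above can be sketched as a single process that owns the UDP socket and routes each datagram to a per-peer worker, keyed here by source address and port. All names are hypothetical and error handling is omitted; this illustrates the dispatch pattern, not a production server.

```erlang
%% Sketch of an application-specific packet distributor: one process
%% owns the UDP socket in {active, once} mode and hands each datagram
%% to a worker chosen by {SourceAddr, SourcePort}.
-module(udp_dispatch).
-export([start/1]).

start(Port) ->
    {ok, Sock} = gen_udp:open(Port, [binary, {active, once}]),
    loop(Sock, #{}).

loop(Sock, Workers0) ->
    receive
        {udp, Sock, Addr, Port, Packet} ->
            Key = {Addr, Port},
            Workers =
                case maps:find(Key, Workers0) of
                    {ok, Pid} ->
                        %% Existing worker for this peer: forward.
                        Pid ! {packet, Packet},
                        Workers0;
                    error ->
                        %% New peer: spawn a worker seeded with the
                        %% first packet.
                        Pid = spawn_link(fun() -> worker([Packet]) end),
                        Workers0#{Key => Pid}
                end,
            %% Re-arm the socket for the next datagram.
            inet:setopts(Sock, [{active, once}]),
            loop(Sock, Workers)
    end.

worker(Packets) ->
    %% Packets are processed here, concurrently with the dispatcher.
    receive
        {packet, P} -> worker([P | Packets])
    after 5000 ->
        ok  %% idle worker exits; dispatcher map cleanup not shown
    end.
```

The dispatcher stays cheap (one map lookup and one send per datagram), so the hope stated above holds as long as per-packet processing dominates.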
BR, Bengt From: erlang-questions-bounces@REDACTED [mailto:erlang-questions-bounces@REDACTED] On Behalf Of Bogdan Andu Sent: den 9 december 2015 10:15 To: Erlang Subject: [erlang-questions] UDP concurrent server following the thread https://groups.google.com/forum/?hl=en#!topic/erlang-programming/6Q3cLtJdwIU as it seems that POST to topic does not work After more tests, the basic question that remains: Is there a way to have more than one process blocked in a gen_udp:recv/2 call? This seems to not be possible, probably because of the way UDP sockets work. Sequentially it works as expected, but when I try to spawn another process that attempts to execute gen_udp:recv/2 while the first process is already in gen_udp:recv/2, the second process gets an ealready error. This means that two processes cannot concurrently do gen_udp:recv/2. In the scenario with the socket in {active, once} or {active, true} mode there is only one process that can receive messages from the socket (the one that called gen_udp:open/2), and for multi-threaded applications this can quickly become a bottleneck. In this case, however, the ealready error disappears, of course. I tried both variants and both have disadvantages. Is there an idiom for designing a concurrent UDP server in Erlang? So far, it seems impossible. -------------- next part -------------- An HTML attachment was scrubbed... URL: From vladdu55@REDACTED Wed Dec 9 11:36:16 2015 From: vladdu55@REDACTED (Vlad Dumitrescu) Date: Wed, 9 Dec 2015 11:36:16 +0100 Subject: [erlang-questions] Idea for deprecating EPMD In-Reply-To: References: Message-ID: Hi Geoff, How would you know which port each Erlang node listens on? With epmd, the node publishes the port to the daemon and the peers need not know it. It feels to me that a central registry is still needed, or each node would have to run its own copy somehow. The latter might work relatively easily for regular nodes, but we also have C and Java nodes...
regards, Vlad On Wed, Dec 9, 2015 at 12:57 AM, Geoff Cant wrote: > Hi all, I find EPMD to be a regular frustration when deploying and > operating Erlang systems. EPMD is a separate service that needs to be > running for Erlang distribution to work properly, and usually (in systems > that don't use distribution for their main function) it's not set up right, > and you only notice in production because the only time you use > distribution is to get a remote shell (over localhost). (Maybe I'm just bad > at doing this, but I do it a lot.) > > Erlang node names already encode host information: > "descriptive_name@REDACTED". If we include the erlang distribution listen > port too, that would remove the need for EPMD. For example: > "descriptive_name@REDACTED:distribution_port". Node names using this > scheme would skip the EPMD step, otherwise erlang distribution would fall > back to the current system. > > > My questions for the list are: > * Are you annoyed by epmd too? > * Do you think this idea is worth me writing up into an EEP or writing a > pull request? > * Do you think this idea is unworkable for some reason I'm overlooking? > > Thanks, > -Geoff > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions > -------------- next part -------------- An HTML attachment was scrubbed... URL: From zxq9@REDACTED Wed Dec 9 11:42:16 2015 From: zxq9@REDACTED (zxq9) Date: Wed, 09 Dec 2015 19:42:16 +0900 Subject: [erlang-questions] Idea for deprecating EPMD In-Reply-To: References: Message-ID: <8405340.G1NMeAivzG@changa> On 2015-12-08 15:57:06 Geoff Cant wrote: > Erlang node names already encode host information: > "descriptive_name@REDACTED". If we include the erlang distribution > listen port too, that would remove the need for EPMD. For example: > "descriptive_name@REDACTED:distribution_port".
Node names using this > scheme would skip the EPMD step, otherwise erlang distribution would > fall back to the current system. > > > My questions for the list are: > * Are you annoyed by epmd too? > * Do you think this idea is worth me writing up into an EEP or writing > a pull request? > * Do you think this idea is unworkable for some reason I'm overlooking? I'm guilty of not really considering an alternative (so many other things I deal with more often). But yeah, now that I think of it, that is sort of annoying. This is a nice idea, and might even be nicer if it were taken a step further and represented Erlang nodes according to a for-real URI scheme: epmd://name@REDACTED[:port] or something along those lines. There is probably some reason the example above is silly, and maybe "epmd" isn't the best name for the protocol, but you get the idea. I think normalizing that in the way so many other services are normalized would be a big win, and would also allow implementations of "compatible nodes" that weren't ad-hoc libraries. -Craig From bchesneau@REDACTED Wed Dec 9 11:42:50 2015 From: bchesneau@REDACTED (Benoit Chesneau) Date: Wed, 09 Dec 2015 10:42:50 +0000 Subject: [erlang-questions] Idea for deprecating EPMD In-Reply-To: References: Message-ID: On Wed, Dec 9, 2015 at 11:36 AM Vlad Dumitrescu wrote: > Hi Geoff, > > How would you know which port each Erlang node listens on? With > epmd, the node publishes the port to the daemon and the peers need not know > it. It feels to me that a central registry is still needed, or each node > would have to run its own copy somehow. The latter might work relatively > easily for regular nodes, but we also have C and Java nodes... > > regards, > Vlad > > Exposing the rpc port could be done using any gossip method, TCP/UDP multicast or broadcast, mDNS, ... IMO making EPMD optional is a good idea. It would also allow us to improve out-of-band messaging using tools like gen_rpc.
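[Editor's note: Benoit's suggestion of announcing the distribution port over multicast instead of registering it with epmd can be sketched roughly as below. This is a hypothetical illustration only: the module name, multicast group, UDP port, and announce interval are all arbitrary picks for the example, and real discovery would need authentication, timeouts, and peer expiry.]

```erlang
-module(node_announce).
-export([announce/1, listen/0]).

%% Arbitrary multicast group and UDP port, chosen for the example.
-define(GROUP, {239,1,2,3}).
-define(PORT, 45892).

%% Periodically multicast this node's name and distribution port,
%% taking the role epmd registration plays today.
announce(DistPort) ->
    {ok, S} = gen_udp:open(0, [binary]),
    announce_loop(S, term_to_binary({node(), DistPort})).

announce_loop(S, Msg) ->
    ok = gen_udp:send(S, ?GROUP, ?PORT, Msg),
    timer:sleep(5000),
    announce_loop(S, Msg).

%% Collect announcements from peers into a simple Node => Port map.
listen() ->
    {ok, S} = gen_udp:open(?PORT, [binary,
                                   {reuseaddr, true},
                                   {add_membership, {?GROUP, {0,0,0,0}}}]),
    listen_loop(S, #{}).

listen_loop(S, Peers) ->
    receive
        {udp, S, _Ip, _Port, Bin} ->
            {Node, DistPort} = binary_to_term(Bin),
            listen_loop(S, Peers#{Node => DistPort})
    end.
```

Note that binary_to_term/1 on untrusted input is unsafe; a real implementation would use binary_to_term(Bin, [safe]) or a plain-text wire format.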
- benoit > > On Wed, Dec 9, 2015 at 12:57 AM, Geoff Cant wrote: > >> Hi all, I find EPMD to be a regular frustration when deploying and >> operating Erlang systems. EPMD is a separate service that needs to be >> running for Erlang distribution to work properly, and usually (in systems >> that don?t use distribution for their main function) it's not set up right, >> and you only notice in production because the only time you use for >> distribution is to get a remote shell (over localhost). (Maybe I?m just bad >> at doing this, but I do it a lot) >> >> Erlang node names already encode host information ? >> ?descriptive_name@REDACTED?. If we include the erlang distribution >> listen port too, that would remove the need for EPMD. For example: >> ?descriptive_name@REDACTED:distribution_port?. Node names using this >> scheme would skip the EPMD step, otherwise erlang distribution would fall >> back to the current system. >> >> >> My questions for the list are: >> * Are you annoyed by epmd too? >> * Do you think this idea is worth me writing up into an EEP or writing a >> pull request? >> * Do you think this idea is unworkable for some reason I?m overlooking? >> >> Thanks, >> -Geoff >> _______________________________________________ >> erlang-questions mailing list >> erlang-questions@REDACTED >> http://erlang.org/mailman/listinfo/erlang-questions >> > > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions > -------------- next part -------------- An HTML attachment was scrubbed... URL: From vladdu55@REDACTED Wed Dec 9 11:46:20 2015 From: vladdu55@REDACTED (Vlad Dumitrescu) Date: Wed, 9 Dec 2015 11:46:20 +0100 Subject: [erlang-questions] Idea for deprecating EPMD In-Reply-To: References: Message-ID: Ok, can these other protocols handle the cookie authentication? I'm not very familiar with them. And let's not forget that "old" nodes need to work with this. 
For the record, I'm not a fan of epmd either, but it might be tricky to replace. regards, Vlad On Wed, Dec 9, 2015 at 11:42 AM, Benoit Chesneau wrote: > > > On Wed, Dec 9, 2015 at 11:36 AM Vlad Dumitrescu > wrote: > >> Hi Geoff, >> >> How would you know which port where each erlang node listens on? With >> epmd, the node publishes the port to the daemon and the peers need not know >> it. It feels to me that a central registry is still needed, or each node >> would have to run its own copy somehow. The latter might work relatively >> easy for regular nodes, but we also have C and Java nodes... >> >> regards, >> Vlad >> >> > Exposing the rpc port could be done using any GOSSIP method, TCP/UDP > multicast or broadcast, mdns, ... > > Imo making EPMD optional is a good idea. It would also allows to improve > out-of-band messaging using tools like gen_rpc. > > - benoit > > >> >> On Wed, Dec 9, 2015 at 12:57 AM, Geoff Cant wrote: >> >>> Hi all, I find EPMD to be a regular frustration when deploying and >>> operating Erlang systems. EPMD is a separate service that needs to be >>> running for Erlang distribution to work properly, and usually (in systems >>> that don?t use distribution for their main function) it's not set up right, >>> and you only notice in production because the only time you use for >>> distribution is to get a remote shell (over localhost). (Maybe I?m just bad >>> at doing this, but I do it a lot) >>> >>> Erlang node names already encode host information ? >>> ?descriptive_name@REDACTED?. If we include the erlang distribution >>> listen port too, that would remove the need for EPMD. For example: >>> ?descriptive_name@REDACTED:distribution_port?. Node names using this >>> scheme would skip the EPMD step, otherwise erlang distribution would fall >>> back to the current system. >>> >>> >>> My questions for the list are: >>> * Are you annoyed by epmd too? >>> * Do you think this idea is worth me writing up into an EEP or writing a >>> pull request? 
>>> * Do you think this idea is unworkable for some reason I'm overlooking? >>> >>> Thanks, >>> -Geoff >>> _______________________________________________ >>> erlang-questions mailing list >>> erlang-questions@REDACTED >>> http://erlang.org/mailman/listinfo/erlang-questions >>> >> >> _______________________________________________ >> erlang-questions mailing list >> erlang-questions@REDACTED >> http://erlang.org/mailman/listinfo/erlang-questions >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bog495@REDACTED Wed Dec 9 10:03:41 2015 From: bog495@REDACTED (Bogdan Andu) Date: Wed, 9 Dec 2015 01:03:41 -0800 (PST) Subject: [erlang-questions] xmerl utf-8 question In-Reply-To: <816b5722-e07a-48f4-b83a-c041035530f3@googlegroups.com> References: <816b5722-e07a-48f4-b83a-c041035530f3@googlegroups.com> Message-ID: <1440f96f-4e71-4851-a3f5-226cdcdda6a0@googlegroups.com> That should be the unicode code point in hexadecimal, but take a look here: http://unicodelookup.com and input ü On Sunday, March 22, 2015 at 12:09:46 AM UTC+2, Charles Blair wrote: > > -include("/usr/local/lib/erlang/lib/xmerl-1.3.7/include/xmerl.hrl"). > > -import(xmerl_xs, [xslapply/2, value_of/1, select/2, built_in_rules/2]). > > -import(xmerl_lib, [find_attribute/2, markup/2, markup/3]). > > Using the above, Türkei is output as Tu\x{308}rkei (seven characters > beginning with "\", counted in vim, so it's not a display issue). > > I've tried to isolate the problem in various different ways and failed. > Has anyone else run into this? Were you able to address it, and if so, how? > I've run the XML input through an XSL stylesheet without issue. The issue > arises when translating the stylesheet to xmerl and running the same XML > through it. > > Thanks. > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From zxq9@REDACTED Wed Dec 9 11:54:55 2015 From: zxq9@REDACTED (zxq9) Date: Wed, 09 Dec 2015 19:54:55 +0900 Subject: [erlang-questions] Idea for deprecating EPMD In-Reply-To: References: Message-ID: <1610142.bQ5bYuTpAD@changa> On 2015-12-09 11:46:20 Vlad Dumitrescu wrote: > Ok, can these other protocols handle the cookie authentication? I'm not > very familiar with them. And let's not forget that "old" nodes need to work > with this. > > For the record, I'm not a fan of epmd either, but it might be tricky to > replace. Not "might", it will be a significant amount of work. Which means it won't happen any time soon. OTOH, having an EEP about it will at least be a way to open the issue up, start checking for obvious issues, and document that this is something that should eventually happen. -Craig From bog495@REDACTED Wed Dec 9 11:59:38 2015 From: bog495@REDACTED (Bogdan Andu) Date: Wed, 9 Dec 2015 12:59:38 +0200 Subject: [erlang-questions] UDP concurrent server In-Reply-To: References: Message-ID: Hi Bengt, Yes, I am aware that UDP sockets work differently from TCP sockets, but I was wondering if there is an idiom for UDP, as there is for TCP, to make things concurrent. Given the limitations of UDP, it seems that the best option is to implement the version with a controller process that receives UDP packets, spawns processes to actually handle each packet based on some info, and immediately fetches the next message from its mailbox. My only concern here is the bottleneck. The mailbox can easily be overloaded and the response of the server severely delayed. Are there any best practices for handling such situations, avoiding mailbox overload while handling, say, a million concurrent (UDP) connections? Thank you, /Bogdan On Wed, Dec 9, 2015 at 12:35 PM, Bengt Johansson E < bengt.e.johansson@REDACTED> wrote: > Hi Bogdan! > > > > Your questions make me a bit confused.
I wonder **why** you want two > processes waiting for packets from the same socket and **what** you > expect to happen? > > > > Generally one usually only speaks about concurrent servers regarding TCP > where you bind to a socket waiting for incoming connections and once a > connection is established, you spawn a process (or thread depending on > language) and process the data coming over the connection concurrently. > > Note that the main loop of the server that waits for responses is always > sequential. The new process handling the connection gets a new socket with > a free port used only for that particular connection. > > > > But! UDP lacks support for connections mainly since it is a message based > protocol and hence is devoid of any connection abstraction J > > > > Still the question remains what is a concurrent UDP server? I guess what > you want to achieve is some kind of distribution of incoming packets to > several processes to handle them. In that case you should either go for the > solution of TCP to set up new socket for each communication or write a > simple process that classifies the incoming packets and distributes them to > the correct process based on some information in the packet, fi. Some > identifier you have chosen to identify the connection. Basically you have > to implement an application specific load balancer ? or rather load > distributor ? hoping that the time taken to actually process the packets > greatly overshadows the time it takes to distribute them. J > > > > Anyway, there is no way the underlying UDP stack and/or erts can make the > decision for you. To which process to send the packet that is. > > > > Hope that helps! 
> > BR, BengtJ > > > > > > > > > > > > > > *From:* erlang-questions-bounces@REDACTED [mailto: > erlang-questions-bounces@REDACTED] *On Behalf Of *Bogdan Andu > *Sent:* den 9 december 2015 10:15 > *To:* Erlang > *Subject:* [erlang-questions] UDP concurrent server > > > > following the thread > https://groups.google.com/forum/?hl=en#!topic/erlang-programming/6Q3cLtJdwIU > > as it seems that POSt to topic does not work > > > After more tests the basic questions that remains .. > > Is there a way to have more than one process be blocked > in gen_udp:recv/2 call as this seems to not be possible, > probably because the way udp sockets work. > > Sequentially works as expected, but when when I try to spawn another > process > that makes and attempt to execute gen_udp:recv/2 while the first process > already does > gen_udp:recv/2 , the second process gives elready error. This means that 2 > process > cannot concurrently do gen_udp:recv/2 . > > In scenario with socket {active, once} or {active, true} there is only one > process that can > receive the message from socket (the one that does gen_udp:open/2 ), > and for multi-threaded applications this quickly can become a bottleneck. > In this case, however, elready error disappears of course. > . > I tried both variants and both have disavantages. > > Is there an idiom for designing a udp concurrent server in Erlang? > So far, it seems impossible. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sergej.jurecko@REDACTED Wed Dec 9 12:16:48 2015 From: sergej.jurecko@REDACTED (=?UTF-8?Q?Sergej_Jure=C4=8Dko?=) Date: Wed, 9 Dec 2015 12:16:48 +0100 Subject: [erlang-questions] UDP concurrent server In-Reply-To: References: Message-ID: On Wed, Dec 9, 2015 at 11:35 AM, Bengt Johansson E < bengt.e.johansson@REDACTED> wrote: > > > Anyway, there is no way the underlying UDP stack and/or erts can make the > decision for you. To which process to send the packet that is. > > > Sure it could. 
By using the 5-tuple: {SourceIP, SourcePort, DestinationIP, DestinationPort, Protocol} (protocol being either IPv4 or IPv6). If we had gen_udp:accept(Sock), which would bind the 5-tuple to the process, it would make writing UDP servers in Erlang so much better. We could have acceptor pools just like with TCP. Sergej -------------- next part -------------- An HTML attachment was scrubbed... URL: From bchesneau@REDACTED Wed Dec 9 12:59:38 2015 From: bchesneau@REDACTED (Benoit Chesneau) Date: Wed, 09 Dec 2015 11:59:38 +0000 Subject: [erlang-questions] UDP concurrent server In-Reply-To: References: Message-ID: On Wed, Dec 9, 2015 at 11:07 AM Bogdan Andu wrote: > following the thread > https://groups.google.com/forum/?hl=en#!topic/erlang-programming/6Q3cLtJdwIU > > as it seems that POST to topic does not work > > After more tests the basic question that remains .. > > Is there a way to have more than one process blocked > in a gen_udp:recv/2 call, as this seems to not be possible, > probably because of the way UDP sockets work. > > Sequentially it works as expected, but when I try to spawn another > process > that attempts to execute gen_udp:recv/2 while the first process > already does > gen_udp:recv/2, the second process gets an ealready error. This means that two > processes > cannot concurrently do gen_udp:recv/2. > You can, if you tell the UDP socket to reuse the port: https://github.com/refuge/rbeacon/blob/master/src/rbeacon.erl#L414-L425 If you do this, any process will be able to reuse it and send/recv on it. - benoît -------------- next part -------------- An HTML attachment was scrubbed...
URL: From sergej.jurecko@REDACTED Wed Dec 9 13:11:24 2015 From: sergej.jurecko@REDACTED (=?UTF-8?Q?Sergej_Jure=C4=8Dko?=) Date: Wed, 9 Dec 2015 13:11:24 +0100 Subject: [erlang-questions] UDP concurrent server In-Reply-To: References: Message-ID: On Wed, Dec 9, 2015 at 12:59 PM, Benoit Chesneau wrote: > > You can, if you tell the UDP socket to reuse the port: > https://github.com/refuge/rbeacon/blob/master/src/rbeacon.erl#L414-L425 > > If you do this, any process will be able to reuse it and send/recv on it. > > This enables you to call gen_udp:open for port X from multiple processes. Unfortunately, as long as the first socket is alive, all traffic will go there. So it's just a reliability improvement (if the first process goes down), but not a scalability improvement. Sergej -------------- next part -------------- An HTML attachment was scrubbed... URL: From bchesneau@REDACTED Wed Dec 9 13:23:15 2015 From: bchesneau@REDACTED (Benoit Chesneau) Date: Wed, 09 Dec 2015 12:23:15 +0000 Subject: [erlang-questions] UDP concurrent server In-Reply-To: References: Message-ID: On Wed, Dec 9, 2015 at 1:11 PM Sergej Jurečko wrote: > On Wed, Dec 9, 2015 at 12:59 PM, Benoit Chesneau > wrote: > >> >> You can, if you tell the UDP socket to reuse the port: >> https://github.com/refuge/rbeacon/blob/master/src/rbeacon.erl#L414-L425 >> >> If you do this, any process will be able to reuse it and send/recv on it. >> >> > This enables you to call gen_udp:open for port X from multiple processes. > Unfortunately, as long as the first socket is alive, all traffic will go there. > So it's just a reliability improvement (if the first process goes down), but > not a scalability improvement. > > Well, it would allow you to open different sockets on the same port for recv. Normally the threads should compete to get the data, according to https://lwn.net/Articles/542629/ So as long as each process is on another thread or CPU, it should increase the concurrency.
- benoît -------------- next part -------------- An HTML attachment was scrubbed... URL: From sergej.jurecko@REDACTED Wed Dec 9 13:33:27 2015 From: sergej.jurecko@REDACTED (=?UTF-8?Q?Sergej_Jure=C4=8Dko?=) Date: Wed, 9 Dec 2015 13:33:27 +0100 Subject: [erlang-questions] UDP concurrent server In-Reply-To: References: Message-ID: On Wed, Dec 9, 2015 at 1:23 PM, Benoit Chesneau wrote: > > Well, it would allow you to open different sockets on the same port for > recv. Normally the threads should compete to get the data, according to > > https://lwn.net/Articles/542629/ > > So as long as each process is on another thread or CPU, it should increase the > concurrency. > > Yes, that is what the documentation says, but I don't think the actual implementation on darwin/linux works this way. This simple test code will work the same on both platforms, and all data is received by the first process (editor's note: the SO_REUSEPORT constants below are the Linux values and are platform-specific, and integer_to_list/1 replaces the external butil:tolist/1 helper from the original post):

-module(udptest).
-export([udp/0]).

%% Linux values; these socket-option constants differ on other platforms.
-define(SOL_SOCKET, 1).
-define(SO_REUSEPORT, 15).

udp() ->
    Srvrs = [spawn(fun() -> udpsrv(N) end) || N <- lists:seq(1,10)],
    timer:sleep(100),
    [spawn(fun() -> udpclient(N) end) || N <- lists:seq(1,1000)],
    timer:sleep(1000),
    [S ! die || S <- Srvrs],
    ok.

udpsrv(N) ->
    Opts = [{raw, ?SOL_SOCKET, ?SO_REUSEPORT, <<1:32/native>>},
            {active,true}, inet, binary, {recbuf,1024*1024}],
    {ok,S} = gen_udp:open(23232, Opts),
    io:format("Opened server ~p~n", [N]),
    udpsrv(N, S).

udpsrv(N, S) ->
    receive
        {udp, S, _IP, Port, Msg} ->
            io:format("Received msg=~p, srvid=~p, client_port=~p~n", [Msg, N, Port]),
            udpsrv(N, S);
        die ->
            ok
    end.

udpclient(N) ->
    {ok,S} = gen_udp:open(0),
    gen_udp:send(S, {127,0,0,1}, 23232, ["sending from ", integer_to_list(N)]).

-------------- next part -------------- An HTML attachment was scrubbed... URL: From lukas@REDACTED Wed Dec 9 13:39:06 2015 From: lukas@REDACTED (Lukas Larsson) Date: Wed, 9 Dec 2015 13:39:06 +0100 Subject: [erlang-questions] ssl_session_cache: trouble + questions In-Reply-To: References: Message-ID: Hello! You did not mention which version of ssl you are using.
If you do not have the latest version, please have a look at the release notes for ssl and see if any of the fixes there apply to you. There are fixes in ssl that relate to a very large session cache. Lukas On Tue, Dec 8, 2015 at 1:15 PM, Danil Zagoskin wrote: > Hi! > > Recently our servers started to consume lots of SYS CPU. > Inside a VM, the top processes (by reductions per second) were ssl session > validators. > The most popular current function in runnable processes was > calendar:datetime_to_gregorian_seconds/2. > Also gproc was very slow (it uses ETS). > > According to `ets:i().` the largest ETS table was: > 49178 server_ssl_otp_session_cache ordered_set 5015080 > 305919839 ssl_manager > > We have worked around the problem by using a lower session_lifetime. > > But reading the code I came to these questions: > - The cache is of `ordered_set` type, which has logarithmic access time. > Does it have to be `ordered_set`, not just `set` (with constant access > time)? > - There is no protection against running multiple validators. This leads > to many processes accessing a single table and doing the same work. This > seems to greatly increase SYS CPU usage and slow down other ETS tables. > Should we skip starting a new validator if the previous one is still running? > - ssl_session:valid_session is called for every session in the cache and > calls `calendar:datetime_to_gregorian_seconds({date(), time()})` itself. > Should we use `erlang:monotonic_time(seconds)` everywhere instead? Or maybe > we should pre-calculate the minimal allowed timestamp to avoid extra > arithmetic? > > > -- > Danil Zagoskin | z@REDACTED > > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions > > -------------- next part -------------- An HTML attachment was scrubbed...
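[Editor's note: Danil's last suggestion, pre-calculating the minimal allowed timestamp once per validation sweep instead of calling calendar:datetime_to_gregorian_seconds/1 once per session, can be sketched as below. The function name and the {Id, CreatedAt} cache shape are simplified inventions for the illustration, not the actual ssl_manager code.]

```erlang
%% Compute the expiry cutoff once, then compare each session's
%% creation time (in gregorian seconds) against it.
validate_sessions(Sessions, LifetimeSeconds) ->
    Now = calendar:datetime_to_gregorian_seconds(calendar:universal_time()),
    Cutoff = Now - LifetimeSeconds,
    [Session || {_Id, CreatedAt} = Session <- Sessions, CreatedAt >= Cutoff].
```

This turns N calendar calls per sweep into one, which matters when the cache holds millions of sessions as in the table dump above.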
URL: From vladimir.ralev@REDACTED Wed Dec 9 14:00:37 2015 From: vladimir.ralev@REDACTED (Vladimir Ralev) Date: Wed, 9 Dec 2015 15:00:37 +0200 Subject: [erlang-questions] gb_trees and dict performance Message-ID: Hi all, I am curious about the O-complexity of the operations in these data structures in Erlang. It would seem to me that, because of immutability, adding elements to those structures would often require a duplication of the data structure, and copying is an O(N) operation itself. So if you insert an element into a dict or tree, instead of O(1) or O(logN) it would be O(N) complexity in both time and memory. I realize there are some shortcuts done internally to reuse the memory, but I imagine that is not always possible. Two questions: 1. Can someone comment on the average complexity? Is it in line with what the collections in other languages offer? 2. If in a job interview you are asked what the complexity of dict:store() and gb_trees:enter() is, what would you say? From bog495@REDACTED Wed Dec 9 14:02:07 2015 From: bog495@REDACTED (Bogdan Andu) Date: Wed, 9 Dec 2015 15:02:07 +0200 Subject: [erlang-questions] UDP concurrent server In-Reply-To: References: Message-ID: Seems a good idea, but also a dangerous one. I think I will stay with receiving {udp, Socket, Host, Port, Bin} messages in the controlling process and dispatching from there. But now I face another problem... I have: handle_info({udp, Socket, Host, Port, Bin}, State) -> {noreply, State}; In a few minutes the memory allocated to binaries increases to ~ 500MB when running the command: ffmpeg -f lavfi -i aevalsrc="sin(40*2*PI*t)" -ar 8000 -f mulaw -f rtp rtp://10.10.13.104:5004 It seems that the Bins accumulate in the process memory area and are never deallocated unless the process is killed, which is not an option. How can I manage this and avoid running out of memory in a matter of minutes? If I change the clause to: handle_info({udp1, Socket, Host, Port, Bin}, State) -> memory stays at 190KB.
So as long as the process receives packets, they accumulate in the binary memory area and are never deallocated. On Wed, Dec 9, 2015 at 2:23 PM, Benoit Chesneau wrote: > > On Wed, Dec 9, 2015 at 1:11 PM Sergej Jurečko > wrote: > >> On Wed, Dec 9, 2015 at 12:59 PM, Benoit Chesneau >> wrote: >> >>> >>> You can, if you tell the UDP socket to reuse the port: >>> https://github.com/refuge/rbeacon/blob/master/src/rbeacon.erl#L414-L425 >>> >>> If you do this, any process will be able to reuse it and send/recv on it. >>> >>> >> This enables you to call gen_udp:open for port X from multiple processes. >> Unfortunately, as long as the first socket is alive, all traffic will go there. >> So it's just a reliability improvement (if the first process goes down), but >> not a scalability improvement. >> >> > Well, it would allow you to open different sockets on the same port for > recv. Normally the threads should compete to get the data, according to > > https://lwn.net/Articles/542629/ > > So as long as each process is on another thread or CPU, it should increase the > concurrency. > > - benoît > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sergej.jurecko@REDACTED Wed Dec 9 14:05:41 2015 From: sergej.jurecko@REDACTED (=?UTF-8?Q?Sergej_Jure=C4=8Dko?=) Date: Wed, 9 Dec 2015 14:05:41 +0100 Subject: [erlang-questions] UDP concurrent server In-Reply-To: References: Message-ID: What if you run erlang with: -env ERL_FULLSWEEP_AFTER 10 On Wed, Dec 9, 2015 at 2:02 PM, Bogdan Andu wrote: > Seems a good idea, but also a dangerous one. > > I think I will stay with receiving {udp, Socket, Host, Port, Bin} messages > in the controlling process and dispatching from there. > > But now I face another problem...
> I have : > > handle_info({udp, Socket, Host, Port, Bin}, State) -> > {noreply, State}; > > > In a few minutes the memory allocated to binary increases to ~ 500MB > by running the command: > > fmpeg -f lavfi -i aevalsrc="sin(40*2*PI*t)" -ar 8000 -f mulaw -f rtp rtp:// > 10.10.13.104:5004 > > It seems that the Bins are accumulated in the process memory area, an > never are deallocated > unless the process is killed, which is not an option. > > How can I manage this and to avoid running out of memory in a matter of > minutes ? > > If I change the clause to: > handle_info({udp1, Socket, Host, Port, Bin}, State) -> > > memory stays at 190KB. > > So as long as the process receives the packet, this is accumulated in > binary memory are > and never deallocated. > > On Wed, Dec 9, 2015 at 2:23 PM, Benoit Chesneau > wrote: > >> >> On Wed, Dec 9, 2015 at 1:11 PM Sergej Jure?ko >> wrote: >> >>> On Wed, Dec 9, 2015 at 12:59 PM, Benoit Chesneau >>> wrote: >>> >>>> >>>> You can if you tell to the udp socket to reuse the port: >>>> https://github.com/refuge/rbeacon/blob/master/src/rbeacon.erl#L414-L425 >>>> >>>> If you do this any process will be able to reuse it and send/recv to it. >>>> >>>> >>> This enables you to call gen_udp:open for port X from multiple >>> processes. Unfortunately as long as first socket is alive, all traffic will >>> go there. So it's just a reliability improvement (if first process goes >>> down), but not a scalability improvement. >>> >>> >> Well it would allows you to open different socket for the same ports for >> recv. Normally the threads should compete to get the data according to >> >> https://lwn.net/Articles/542629/ >> >> So until a process is on another thread or CPU it should increase the >> concurrency. >> >> - beno?t >> > > -------------- next part -------------- An HTML attachment was scrubbed... 
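[Editor's note: the growing "Bin" figure reported above is typical of off-heap (reference-counted) binaries that are kept alive by references in a rarely fullswept process heap. Beyond the global ERL_FULLSWEEP_AFTER flag Sergej suggests, the same effect can be scoped to just the receiving process. A hypothetical sketch; the threshold of 10 is an arbitrary pick:]

```erlang
%% Option 1: spawn only the datagram-handling process with a low
%% per-process fullsweep_after, so binary references are dropped
%% promptly without changing the VM-wide default.
start() ->
    spawn_opt(fun loop/0, [{fullsweep_after, 10}]).

loop() ->
    receive
        {udp, _Socket, _Host, _Port, _Bin} ->
            %% Option 2: force a full collection when the mailbox is
            %% momentarily empty, so idle periods reclaim binaries.
            case erlang:process_info(self(), message_queue_len) of
                {message_queue_len, 0} -> erlang:garbage_collect();
                _ -> ok
            end,
            loop()
    end.
```

Either option keeps the binary area bounded at the cost of more frequent full sweeps in this one process.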
URL: From bog495@REDACTED Wed Dec 9 14:16:44 2015 From: bog495@REDACTED (Bogdan Andu) Date: Wed, 9 Dec 2015 15:16:44 +0200 Subject: [erlang-questions] UDP concurrent server In-Reply-To: References: Message-ID: the same, although I 'd prefer not to touch the defaults. entop output: Node: 'mps_dbg@REDACTED' (Disconnected) (18/7.0) unix (linux 4.0.4) CPU:2 SMP +A:10 Time: local time 15:14:15, up for 000:00:01:05, 0ms latency, Processes: total 53 (RQ 0) at 12233 RpI using 4431.6k (4461.1k allocated) Memory: Sys 239281.1k, Atom 189.8k/197.7k, *Bin 230708.1k,* Code 4828.6k, Ets 289.9k 1 minute about 220 MB On Wed, Dec 9, 2015 at 3:05 PM, Sergej Jure?ko wrote: > What if you run erlang with: > > -env ERL_FULLSWEEP_AFTER 10 > > > On Wed, Dec 9, 2015 at 2:02 PM, Bogdan Andu wrote: > >> Seems a good idea but also a dangerous one. >> >> I think I will stay with receiving {udp, Socket, Host, Port, Bin} messages >> in controlling process and dispatch from there. >> >> But now I face another problem... >> I have : >> >> handle_info({udp, Socket, Host, Port, Bin}, State) -> >> {noreply, State}; >> >> >> In a few minutes the memory allocated to binary increases to ~ 500MB >> by running the command: >> >> fmpeg -f lavfi -i aevalsrc="sin(40*2*PI*t)" -ar 8000 -f mulaw -f rtp >> rtp://10.10.13.104:5004 >> >> It seems that the Bins are accumulated in the process memory area, an >> never are deallocated >> unless the process is killed, which is not an option. >> >> How can I manage this and to avoid running out of memory in a matter of >> minutes ? >> >> If I change the clause to: >> handle_info({udp1, Socket, Host, Port, Bin}, State) -> >> >> memory stays at 190KB. >> >> So as long as the process receives the packet, this is accumulated in >> binary memory are >> and never deallocated. 
>> >> On Wed, Dec 9, 2015 at 2:23 PM, Benoit Chesneau >> wrote: >> >>> >>> On Wed, Dec 9, 2015 at 1:11 PM Sergej Jurečko >>> wrote: >>> >>>> On Wed, Dec 9, 2015 at 12:59 PM, Benoit Chesneau >>>> wrote: >>>> >>>>> >>>>> You can if you tell the udp socket to reuse the port: >>>>> https://github.com/refuge/rbeacon/blob/master/src/rbeacon.erl#L414-L425 >>>>> >>>>> If you do this any process will be able to reuse it and send/recv to >>>>> it. >>>>> >>>>> >>>> This enables you to call gen_udp:open for port X from multiple >>>> processes. Unfortunately as long as the first socket is alive, all traffic will >>>> go there. So it's just a reliability improvement (if the first process goes >>>> down), but not a scalability improvement. >>>> >>>> >>> Well it would allow you to open a different socket on the same port for >>> recv. Normally the threads should compete to get the data according to >>> >>> https://lwn.net/Articles/542629/ >>> >>> So as long as each process is on another thread or CPU it should increase the >>> concurrency. >>> >>> - benoît >>> >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bengt.e.johansson@REDACTED Wed Dec 9 14:25:35 2015 From: bengt.e.johansson@REDACTED (Bengt Johansson E) Date: Wed, 9 Dec 2015 13:25:35 +0000 Subject: [erlang-questions] UDP concurrent server In-Reply-To: References: Message-ID: Hi Bogdan! I'm sure you understand the difference between TCP and UDP, I just wanted to highlight it for the discussion. :) Given that I don't know your application, it is difficult to give specific advice. But let's assume that your application involves sending messages using an application-specific protocol to your server. Let's also assume that each client needs to communicate with the server more than once, so we essentially have a connection between the clients and the server. In that case you will have a dispatcher process like you mention that sends packets along to the processors.
This could be round-robin, random or based on some identifier in the packet. In fact, the concurrent TCP server is implemented using that pattern as well, but it hands off the classification of connections to the TCP/IP stack by using different ports for each connection. I agree that it would be kind of nifty for some applications to allow the UDP stack to spread incoming packets to more than one receiver, but that would be limited to random or round-robin distribution or some similar simple method, since it doesn't understand the contents of the packet. Unfortunately, the dispatcher "pattern" puts a cap on the maximum possible concurrency. Amdahl's law states that the maximum possible speedup of a parallel system is limited by its sequential parts, since they will be saturated and then the parallel parts will just idle. So if the time spent in the sequential part (the dispatcher) is 10% per packet then you will get a maximum 10-fold increase in speed if you go parallel. If you spend 1% there you will get a 100-fold increase and so on. Assuming that your application does a lot of computing for each packet (or waits for disk etc.) you are likely not going to run into that problem, but if you are implementing something that does very little to the packets (like a router or firewall) this might be a problem for you. Note that I reason in terms of processing time; I don't see why you can't have millions of open connections if they do not communicate too much. Erlang is an excellent language for that kind of application. As for overload situations, the UDP stack will handle that for you by dropping packets when its buffers run full. If that is a problem, your application protocol will have to handle packet retransmits by itself. Good luck with your project!
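Bengt's 10% and 1% figures follow directly from Amdahl's formula; this tiny sketch (module and function names are mine, purely illustrative) makes them checkable:

```erlang
%% Amdahl's law: with a sequential fraction S of the per-packet work
%% (the dispatcher's share) and N parallel workers, the speedup is
%%   1 / (S + (1 - S)/N),  which approaches 1/S as N grows.
-module(amdahl).
-export([speedup/2, max_speedup/1]).

speedup(S, N) ->
    1 / (S + (1 - S) / N).

%% The cap Bengt describes: 10% sequential -> at most 10x.
max_speedup(S) ->
    1 / S.
```

For example, `amdahl:max_speedup(0.1)` gives 10.0 and `amdahl:max_speedup(0.01)` gives 100.0, matching the figures above; with any finite number of workers the actual speedup stays below the cap.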
BR, BengtJ From: Bogdan Andu [mailto:bog495@REDACTED] Sent: den 9 december 2015 12:00 To: Bengt Johansson E Cc: Erlang Subject: Re: [erlang-questions] UDP concurrent server Hi Bengt, Yes, I am aware that UDP sockets work differently than TCP sockets, but I was wondering if there is an idiom for UDP, as there is for TCP, to make things concurrent. Given the limitations of UDP, it seems that the best option is to implement the version with a controller process that receives UDP packets, spawns processes to actually handle each packet based on some info, and immediately fetches the next message from the message box. My only concern here is the bottleneck. The message box can easily be overloaded and the response of the server exponentially delayed. Are there any best practices for handling such situations, avoiding message box overloading while handling, say, 1 million concurrent (UDP) connections? Thank you, /Bogdan On Wed, Dec 9, 2015 at 12:35 PM, Bengt Johansson E > wrote: Hi Bogdan! Your questions make me a bit confused. I wonder *why* you want two processes waiting for packets from the same socket and *what* you expect to happen? Generally one usually only speaks about concurrent servers regarding TCP, where you bind to a socket waiting for incoming connections and, once a connection is established, you spawn a process (or thread, depending on language) and process the data coming over the connection concurrently. Note that the main loop of the server that waits for connections is always sequential. The new process handling the connection gets a new socket with a free port used only for that particular connection. But! UDP lacks support for connections, mainly since it is a message-based protocol and hence is devoid of any connection abstraction. :) Still the question remains: what is a concurrent UDP server? I guess what you want to achieve is some kind of distribution of incoming packets to several processes to handle them.
In that case you should either go for the TCP approach and set up a new socket for each communication, or write a simple process that classifies the incoming packets and distributes them to the correct process based on some information in the packet, i.e. some identifier you have chosen to identify the connection. Basically you have to implement an application-specific load balancer, or rather load distributor, hoping that the time taken to actually process the packets greatly overshadows the time it takes to distribute them. :) Anyway, there is no way the underlying UDP stack and/or erts can make the decision for you, i.e. to which process to send the packet. Hope that helps! BR, BengtJ From: erlang-questions-bounces@REDACTED [mailto:erlang-questions-bounces@REDACTED] On Behalf Of Bogdan Andu Sent: den 9 december 2015 10:15 To: Erlang Subject: [erlang-questions] UDP concurrent server following the thread https://groups.google.com/forum/?hl=en#!topic/erlang-programming/6Q3cLtJdwIU as it seems that POST to the topic does not work. After more tests, the basic question that remains is: Is there a way to have more than one process blocked in a gen_udp:recv/2 call? This seems not to be possible, probably because of the way UDP sockets work. Sequentially it works as expected, but when I try to spawn another process that attempts to execute gen_udp:recv/2 while the first process is already in gen_udp:recv/2, the second process gets an elready error. This means that 2 processes cannot concurrently do gen_udp:recv/2. In the scenario with the socket set to {active, once} or {active, true} there is only one process that can receive messages from the socket (the one that does gen_udp:open/2), and for multi-process applications this can quickly become a bottleneck. In this case, however, the elready error disappears, of course. I tried both variants and both have disadvantages. Is there an idiom for designing a concurrent UDP server in Erlang? So far, it seems impossible.
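The dispatcher idiom both posts converge on can be sketched as follows (a minimal toy, not code from the thread; the echo worker is a placeholder for real packet handling, and module/function names are mine):

```erlang
%% One controller owns the socket in {active, once} mode, re-arms it
%% after every datagram, and hands each packet to a freshly spawned
%% worker so the receive loop stays as short as possible.
-module(udp_dispatch).
-export([start/1]).

start(Port) ->
    Caller = self(),
    spawn(fun() ->
                  {ok, Sock} = gen_udp:open(Port, [binary, {active, once}]),
                  {ok, Actual} = inet:port(Sock),
                  Caller ! {listening, Actual},
                  loop(Sock)
          end),
    receive {listening, P} -> {ok, P} end.

loop(Sock) ->
    receive
        {udp, Sock, Host, Port, Packet} ->
            %% Do the real work elsewhere; the dispatcher only routes.
            spawn(fun() -> handle_packet(Sock, Host, Port, Packet) end),
            inet:setopts(Sock, [{active, once}]),
            loop(Sock)
    end.

%% Placeholder worker: echoes the datagram back to the sender.
%% (Any process may send on the socket; only active-mode delivery
%% goes to the owning process.)
handle_packet(Sock, Host, Port, Packet) ->
    gen_udp:send(Sock, Host, Port, Packet).
```

Passing port 0 lets the OS pick a free port, which `start/1` reports back; the bottleneck concern from the thread still applies, since every datagram flows through the single dispatcher mailbox.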
-------------- next part -------------- An HTML attachment was scrubbed... URL: From bog495@REDACTED Wed Dec 9 14:54:48 2015 From: bog495@REDACTED (Bogdan Andu) Date: Wed, 9 Dec 2015 15:54:48 +0200 Subject: [erlang-questions] UDP concurrent server In-Reply-To: References: Message-ID: I am trying to build a rtp relay server for real time media transfer between clients. only rtp/rtpc, not sip/sds On Wed, Dec 9, 2015 at 3:25 PM, Bengt Johansson E < bengt.e.johansson@REDACTED> wrote: > Hi Bogdan! > > > > I?m sure you understand the difference between TCP and UDP, I just wanted > to highlight it for the discussion. J > > > > Given that I don?t know your application, it is difficult to give specific > advice. But let?s assume that your application involves sending messages > using an application specific protocol to your server. Let?s also assume > that each client needs to communicate with the server more than once, so we > essentially have a connection between the clients and the server. > > > > In that case you will have a dispatcher process like you mention that > sends packets along to the processors. This could be round-robin, random or > based on some identifier in the packet. In fact, the concurrent TCP server > is implemented using that pattern as well, but it hands off the > classification of connections to the TCP/IP stack by using different ports > for each connection. > > > > I agree that it would be kind of nifty for some applications to allow the > UDP stack to spread incoming packets to more than one receiver, but that > would be limited to random or round-robin distribution or some similar > simple method since it doesn?t understand the contents of the packet. > > > > Unfortunately, the dispatcher ?pattern? puts a cap on the maximum possible > concurrency. 
Amdahl?s law states that the maximum possible concurrency in a > parallel system is limited by the sequential parts of the system since they > will be saturated and then the parallel parts will just idle. So if the > time spent in the sequential part (the dispatcher) is 10% per packet then > you will get a maximum 10-fold increase in speed if you go parallel. If you > spend 1% there you will get a 100-fold increase and so on. > > > > Assuming that you application does a lot of computing for each packet (or > waits for disk etc.) you are likely not going to run into that problem, but > if you are implementing something that does very little to the packets > (like a router or firewall) this might be a problem for you. > > > > Note that I reason in terms of processing time, I don?t see why you can?t > have millions of open connections if they do not communicate too much. > Erlang is an excellent language for that kind of applications. > > > > As for over-load situations, the UDP stack will handle that for you by > dropping packets when it?s buffers run full. If that is a problem your > application protocol will have to handle packet retransmits by itself. > > > > Good luck with your project! > > > > BR, BengtJ > > > > > > *From:* Bogdan Andu [mailto:bog495@REDACTED] > *Sent:* den 9 december 2015 12:00 > *To:* Bengt Johansson E > *Cc:* Erlang > *Subject:* Re: [erlang-questions] UDP concurrent server > > > > Hi Bengt, > > Yes I am aware that udp sockets are working different than tcp sockets, > > but I was wondering if there is an idiom for udp as it is for tcp to make > things concurrent. > > Having in mind the limitations of udp it seems that the best option > > is to implement the version with a controller process that receives udp > packets > and in spawning processes to actually handle that packet based some info, > and immediately > > fetch the next message from message box. > > My only concern here is the bottleneck. 
The message box can easily be > overloaded > and the response of the server exponentially delayed. > > Is there any best practices to handle such situations avoiding message box > overloading > > while handling say 1 million of concurrent (udp) connections ? > > Thank you, > > /Bogdan > > > > > > > > On Wed, Dec 9, 2015 at 12:35 PM, Bengt Johansson E < > bengt.e.johansson@REDACTED> wrote: > > Hi Bogdan! > > > > Your questions make me a bit confused. I wonder **why** you want two > processes waiting for packets from the same socket and **what** you > expect to happen? > > > > Generally one usually only speaks about concurrent servers regarding TCP > where you bind to a socket waiting for incoming connections and once a > connection is established, you spawn a process (or thread depending on > language) and process the data coming over the connection concurrently. > > Note that the main loop of the server that waits for responses is always > sequential. The new process handling the connection gets a new socket with > a free port used only for that particular connection. > > > > But! UDP lacks support for connections mainly since it is a message based > protocol and hence is devoid of any connection abstraction J > > > > Still the question remains what is a concurrent UDP server? I guess what > you want to achieve is some kind of distribution of incoming packets to > several processes to handle them. In that case you should either go for the > solution of TCP to set up new socket for each communication or write a > simple process that classifies the incoming packets and distributes them to > the correct process based on some information in the packet, fi. Some > identifier you have chosen to identify the connection. Basically you have > to implement an application specific load balancer ? or rather load > distributor ? hoping that the time taken to actually process the packets > greatly overshadows the time it takes to distribute them. 
J > > > > Anyway, there is no way the underlying UDP stack and/or erts can make the > decision for you. To which process to send the packet that is. > > > > Hope that helps! > > BR, BengtJ > > > > > > > > > > > > > > *From:* erlang-questions-bounces@REDACTED [mailto: > erlang-questions-bounces@REDACTED] *On Behalf Of *Bogdan Andu > *Sent:* den 9 december 2015 10:15 > *To:* Erlang > *Subject:* [erlang-questions] UDP concurrent server > > > > following the thread > https://groups.google.com/forum/?hl=en#!topic/erlang-programming/6Q3cLtJdwIU > > as it seems that POSt to topic does not work > > > After more tests the basic questions that remains .. > > Is there a way to have more than one process be blocked > in gen_udp:recv/2 call as this seems to not be possible, > probably because the way udp sockets work. > > Sequentially works as expected, but when when I try to spawn another > process > that makes and attempt to execute gen_udp:recv/2 while the first process > already does > gen_udp:recv/2 , the second process gives elready error. This means that 2 > process > cannot concurrently do gen_udp:recv/2 . > > In scenario with socket {active, once} or {active, true} there is only one > process that can > receive the message from socket (the one that does gen_udp:open/2 ), > and for multi-threaded applications this quickly can become a bottleneck. > In this case, however, elready error disappears of course. > . > I tried both variants and both have disavantages. > > Is there an idiom for designing a udp concurrent server in Erlang? > So far, it seems impossible. > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bchesneau@REDACTED Wed Dec 9 14:58:34 2015 From: bchesneau@REDACTED (Benoit Chesneau) Date: Wed, 09 Dec 2015 13:58:34 +0000 Subject: [erlang-questions] UDP concurrent server In-Reply-To: References: Message-ID: @Bogdan you can use binary:copy to reduce the memory usage in your case. It all depends on your code I guess. 
- benoit On Wed, Dec 9, 2015 at 2:16 PM Bogdan Andu wrote: > the same, although I 'd prefer not to touch the defaults. > > entop output: > Node: 'mps_dbg@REDACTED' (Disconnected) (18/7.0) unix (linux 4.0.4) > CPU:2 SMP +A:10 > Time: local time 15:14:15, up for 000:00:01:05, 0ms latency, > Processes: total 53 (RQ 0) at 12233 RpI using 4431.6k (4461.1k allocated) > Memory: Sys 239281.1k, Atom 189.8k/197.7k, *Bin 230708.1k,* Code 4828.6k, > Ets 289.9k > > 1 minute about 220 MB > > > On Wed, Dec 9, 2015 at 3:05 PM, Sergej Jure?ko > wrote: > >> What if you run erlang with: >> >> -env ERL_FULLSWEEP_AFTER 10 >> >> >> On Wed, Dec 9, 2015 at 2:02 PM, Bogdan Andu wrote: >> >>> Seems a good idea but also a dangerous one. >>> >>> I think I will stay with receiving {udp, Socket, Host, Port, Bin} >>> messages >>> in controlling process and dispatch from there. >>> >>> But now I face another problem... >>> I have : >>> >>> handle_info({udp, Socket, Host, Port, Bin}, State) -> >>> {noreply, State}; >>> >>> >>> In a few minutes the memory allocated to binary increases to ~ 500MB >>> by running the command: >>> >>> fmpeg -f lavfi -i aevalsrc="sin(40*2*PI*t)" -ar 8000 -f mulaw -f rtp >>> rtp://10.10.13.104:5004 >>> >>> It seems that the Bins are accumulated in the process memory area, an >>> never are deallocated >>> unless the process is killed, which is not an option. >>> >>> How can I manage this and to avoid running out of memory in a matter of >>> minutes ? >>> >>> If I change the clause to: >>> handle_info({udp1, Socket, Host, Port, Bin}, State) -> >>> >>> memory stays at 190KB. >>> >>> So as long as the process receives the packet, this is accumulated in >>> binary memory are >>> and never deallocated. 
>>> >>> On Wed, Dec 9, 2015 at 2:23 PM, Benoit Chesneau >>> wrote: >>> >>>> >>>> On Wed, Dec 9, 2015 at 1:11 PM Sergej Jurečko >>>> wrote: >>>> >>>>> On Wed, Dec 9, 2015 at 12:59 PM, Benoit Chesneau >>>>> wrote: >>>>> >>>>>> >>>>>> You can if you tell the udp socket to reuse the port: >>>>>> >>>>>> https://github.com/refuge/rbeacon/blob/master/src/rbeacon.erl#L414-L425 >>>>>> >>>>>> If you do this any process will be able to reuse it and send/recv to >>>>>> it. >>>>>> >>>>>> >>>>> This enables you to call gen_udp:open for port X from multiple >>>>> processes. Unfortunately as long as the first socket is alive, all traffic will >>>>> go there. So it's just a reliability improvement (if the first process goes >>>>> down), but not a scalability improvement. >>>>> >>>>> >>>> Well it would allow you to open a different socket on the same port >>>> for recv. Normally the threads should compete to get the data according to >>>> >>>> https://lwn.net/Articles/542629/ >>>> >>>> So as long as each process is on another thread or CPU it should increase the >>>> concurrency. >>>> >>>> - benoît >>>> >>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ola.a.andersson@REDACTED Tue Dec 8 17:04:48 2015 From: ola.a.andersson@REDACTED (Ola Andersson A) Date: Tue, 8 Dec 2015 16:04:48 +0000 Subject: [erlang-questions] Thank you for 17 years of Erlang/OTP In-Reply-To: <18E579C2-2F8F-4205-85DE-637E29232890@pixie.co.za> References: <18E579C2-2F8F-4205-85DE-637E29232890@pixie.co.za> Message-ID: <9DEFCD8EB8743E4EA623A12F453B66FC26543DE4@ESESSMB207.ericsson.se> Erlang has kept me in business for the last ~25 years and it would never have lasted without the open source. Big thanks to everybody involved. Coincidentally, for those of you who've been around since Bjarne Däcker wrote his thesis, today there is another event worth celebrating. The Hallandsås tunnel is finally officially opened! What crisis? /OLA.
From: erlang-questions-bounces@REDACTED [mailto:erlang-questions-bounces@REDACTED] On Behalf Of Valentin Micic Sent: den 8 december 2015 12:18 To: José Valim Cc: Questions erlang-questions Subject: Re: [erlang-questions] Thank you for 17 years of Erlang/OTP Hear, hear! Erlang changed the way I think about programming, and indeed, transformed my career by leading me out of C/C++ caves. Thank you Ericsson for having the courage to release it to the general public, whilst still maintaining a meaningful level of control. Not everyone may agree with this, but, for what it's worth, I think this has to be a sign of an exceptional company. V/ On 08 Dec 2015, at 12:55 PM, José Valim wrote: Well said Vance! I have been using Erlang and OTP for almost 6 years and it is always a pleasure. Thanks to the OTP team for the amazing work! José Valim www.plataformatec.com.br Skype: jv.ptec Founder and Director of R&D On Tue, Dec 8, 2015 at 5:16 AM, Vance Shipley > wrote: It was seventeen years ago today that Erlang/OTP was released as open source. On this occasion I offer my heartfelt thanks to the OTP team for their fantastic contribution and support to the community. http://web.archive.org/web/19991009002753/www.erlang.se/onlinenews/ErlangOTPos.shtml _______________________________________________ erlang-questions mailing list erlang-questions@REDACTED http://erlang.org/mailman/listinfo/erlang-questions -------------- next part -------------- An HTML attachment was scrubbed...
URL: From mononcqc@REDACTED Wed Dec 9 15:27:57 2015 From: mononcqc@REDACTED (Fred Hebert) Date: Wed, 9 Dec 2015 09:27:57 -0500 Subject: [erlang-questions] UDP concurrent server In-Reply-To: References: Message-ID: <20151209142755.GI886@fhebert-ltm1> On 12/09, Bogdan Andu wrote: >handle_info({udp, Socket, Host, Port, Bin}, State) -> > {noreply, State}; > >In a few minutes the memory allocated to binary increases to ~ 500MB >by running the command: > >ffmpeg -f lavfi -i aevalsrc="sin(40*2*PI*t)" -ar 8000 -f mulaw -f rtp rtp:// >10.10.13.104:5004 > >It seems that the Bins are accumulated in the process memory area and never >are deallocated >unless the process is killed, which is not an option. So there are two ways to look at that. If the size of each binary is > 64 bytes, then the binary is moved to a global shared heap, and is only collected once no process at all has a reference to it anymore. This takes a few forms: 1. either the process receiving the data and passing it on holds on to it inadvertently by not garbage-collecting; 2. the processes the messages would be forwarded to are keeping a copy; 3. the process isn't keeping up. In this case, it would be odd to blame it on 2) and 3), since the snippet above does not forward data, and because "not keeping up" would affect both forms the same (the following one, I mean:) >If I change the clause to: >handle_info({udp1, Socket, Host, Port, Bin}, State) -> > >So as long as the process receives the packet, this is accumulated in >the binary memory area >and never deallocated. So the interesting bit there is: are you sure the memory isn't just going elsewhere? That you're doing nothing with the process? Not matching on the message does not mean the message isn't read. Any receive operation (or most of them, as far as I can tell) works by taking the messages in the mailbox, copying them to the process heap, potentially running a GC, and then getting the message.
Simply not matching the message does not take it out of the mailbox; in fact I would expect bigger leaks with that clause, unless you have a second one that broadly matches on all messages and discards them. Then the problem would point at being about what you do with the message. The thing I would check is: a) are you keeping more references than you think in what you do, or is >handle_info({udp, Socket, Host, Port, Bin}, State) -> > {noreply, State}; really the full clause? Another option can be to return {noreply, State, hibernate} from time to time, which will force a full GC and recompaction of the process memory. If the memory isn't shed away after that, then you know it's being either still referenced by the process, or has another process referencing it. Regards, Fred. From jesper.louis.andersen@REDACTED Wed Dec 9 15:31:46 2015 From: jesper.louis.andersen@REDACTED (Jesper Louis Andersen) Date: Wed, 9 Dec 2015 15:31:46 +0100 Subject: [erlang-questions] gb_trees and dict performance In-Reply-To: References: Message-ID: On Wed, Dec 9, 2015 at 2:00 PM, Vladimir Ralev wrote: > I am curious about the O-complexity of the operations in these data > structures in Erlang. > maps(), gb_trees and dict are all O(lg n) in complexity for insertion/lookup. They all implement trie/tree-like structures, and this allows them to exploit a nice observation: in an immutable data structure, we don't need to copy the whole structure. We can simply re-use the parts of the tree which has *not* changed. Thus, insertion will only replace the "path" from the inserted node to the top-level with the remaining parts of the tree unchanged. In a simple binary tree with {node, Left, X, Right}, we could insert on the Right and once the recursion returns, we would store {node, Left, X, NewRight}. Note that Left never changed, so we can reuse that pointer in the newly formed tree. 
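Jesper's `{node, Left, X, Right}` description can be written out directly; this toy unbalanced search tree (mine, not from the post) rebuilds only the path from the root down to the insertion point and shares everything else:

```erlang
-module(ptree).
-export([insert/2]).

%% A persistent binary search tree: inserting rebuilds only the nodes
%% on the path down to the new element; untouched subtrees are shared,
%% pointer for pointer, between the old and new tree.
insert(X, nil) ->
    {node, nil, X, nil};
insert(X, {node, Left, Y, Right}) when X < Y ->
    {node, insert(X, Left), Y, Right};   %% Right is reused as-is
insert(X, {node, Left, Y, Right}) when X > Y ->
    {node, Left, Y, insert(X, Right)};   %% Left is reused as-is
insert(_X, Tree) ->
    Tree.                                %% already present
```

On a balanced tree this is why an insert costs O(log N) memory as well as time: only the log N nodes on the path are freshly allocated, exactly as in the diagram Fred links below in the gb_trees thread.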
Another way of seeing it is like Git refers to older unchanged files in newer commits when possible, thus avoiding copying the files. The dict module uses a far wider branching factor than gb_trees. It trades faster lookup times for more expensive insertions and more memory pressure. The maps() use a Hash-Array-Mapped-Trie structure (Phil Bagwell, Ideal Hash trees) which is among the most advanced structures we know. The structure is a blend of a hash-table and a mapped trie, getting good properties from both. Access is often very few memory reads (on the order of 5 for 4 billion elements), and insertion is fast - while at the same time being memory efficient and persistent. It is the same data structure Clojure uses for its map construction. In general, the persistence property can be exploited in Erlang programs. Some times it is *more* efficient than manipulation of data through ephemeral (destructive) updates. Due to immutability, you can pass large data structures as pointers without worrying about the caller changing them. It also removes the notion of pass-by-value/pass-by-reference: every data structure is passed by value, since there is no way to change it. But the runtime will often pass by reference automatically if beneficial :) -- J. -------------- next part -------------- An HTML attachment was scrubbed... URL: From mononcqc@REDACTED Wed Dec 9 16:19:02 2015 From: mononcqc@REDACTED (Fred Hebert) Date: Wed, 9 Dec 2015 10:19:02 -0500 Subject: [erlang-questions] gb_trees and dict performance In-Reply-To: References: Message-ID: <20151209151900.GJ886@fhebert-ltm1> On 12/09, Vladimir Ralev wrote: >I am curious about the O-complexity of the operations in these data >structures in Erlang. It would seem to me that because of immutability >adding elements in those structures would often require a duplication >of the data structure and copying is an O(N) operation itself. 
So if >you insert an element in a dict or tree instead of O(1) or O(logN) it >would be O(N) complexity in both time and memory. I realize there are >some shortcuts done internally to reuse the memory but I imagine that >is not always possible. > >Two questions: > >1. Can someone comment on the average complexity, is it in line with >what the collections in other languages offer? > Because the data structures are immutable, you are awarded the possibility of greater structural sharing. When you update the tree, nodes that are in common are kept, and otherwise part of the tree points to the old one, giving you something a bit like this in memory: http://i.imgur.com/FxFFxuP.png Basically, the entire unchanged subtree starting at `B' is kept for both `Old' and `New' trees, but the rest changes in some way. The `H' node is also shared because it had no reason to change. The `F' node had to be copied and rewritten to allow to change what child it has, but note that if the content of `F' is large, that can be shared too. I.e. if my node is `{Current, Left, Right}', I need to change the node, but could still point to the existing `Current' value from the older tree. Garbage collection will later determine what can be reused or not. This lets you update trees in O(log n) memory as well as time, rather than O(n) memory if no sharing took place and deep copies were needed, regardless of the content of the nodes. >2. If you have a job interview and are asked what is the complexity of >dict:store() and gb_trees:enter() what would you say? gb_trees:enter/3 would be O(log N). It's a general balanced tree, and that's as safe of a guess as any other. It has a branching factor of two. Dicts are trickier, as they rely on a nested bucket approach. The bucket approach is basically going to be one gigantic tuple at the top, linking to many smaller tuples at the bottom. The tuples at the bottom will contain lists of elements that match each hash. 
So for example, for a dictionary with 1,000,000 keys, you may get a top tuple with about ~16k elements in it, each of which is a tuple containing anywhere between 16 and 32 elements. The hashmap is implemented by going down in these, and contracting or expanding them as needed. You're never really going to do more than two hops down before finding the list element you need (and then possibly walking down the whole list), so it would be tempting to declare it O(1) for reads (I'm doing all of this very informally), but for writes you still have to modify the bottom slot, and then copy and recreate a new top-level tuple, which is costly (even more so if you expand and have to re-hash). Creating tuples is cheap, but the bigger one is, the costlier it is to reference elements from the old one in the new one, as there are more of them. So the weird thing is that this would intuitively be an O(1) insert, but in practice the cost of doing an update increases as the number of buckets grows with the dictionary size, which would be O(log N) with a branching factor of 16 or so. Anyway, that would be my guess, given that the lowest tuple is between 16 and 32 elements, and that size is what divides up the total number of entries to restrict the top-level tuple. But the actual costs are a bit worse than that in terms of how they scale up, probably due to theory meeting the real world. Here's the measured cost of adding a single element to a dict as a very unscientific benchmark on my work laptop:

Elements in dict vs. time to insert:
- 10: 3µs
- 100: 4µs
- 1000: 7µs
- 10000: 8µs
- 100000: 11µs
- 1000000: 147µs
- 10000000: just creating the base dictionary takes so long I got fed up

Regards, Fred.
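Fred's "very unscientific benchmark" can be reproduced along these lines (a sketch; the module name is mine and absolute numbers will differ per machine):

```erlang
-module(bench_insert).
-export([time_one_insert/1]).

%% Build an N-element dict, then time a single extra dict:store/3.
%% Calling this with increasing N gives a curve like the one quoted
%% above (microseconds per insert growing with dictionary size).
time_one_insert(N) ->
    D = lists:foldl(fun(I, Acc) -> dict:store(I, I, Acc) end,
                    dict:new(), lists:seq(1, N)),
    {Micros, _NewDict} = timer:tc(dict, store, [N + 1, N + 1, D]),
    Micros.
```

For example, `[bench_insert:time_one_insert(N) || N <- [10, 1000, 100000]]` yields three timings in microseconds; a single run is noisy, so averaging several runs per size gives a steadier curve.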
From bog495@REDACTED Wed Dec 9 16:46:41 2015 From: bog495@REDACTED (Bogdan Andu) Date: Wed, 9 Dec 2015 17:46:41 +0200 Subject: [erlang-questions] UDP concurrent server In-Reply-To: <20151209142755.GI886@fhebert-ltm1> References: <20151209142755.GI886@fhebert-ltm1> Message-ID: the so-called controller is a simple gen_server and is started by a supervisor when the application starts supervisor snippet: init([]) -> [{port, Port}] = ets:lookup(config, port), [{listen, IPv4}] = ets:lookup(config, listen), MpsConn = {mps_controller,{mps_controller, start_link, [Port, IPv4]}, temporary, 5000, worker, [mps_controller]}, {ok,{{one_for_all, 3, 10}, [MpsConn]}}. and controller: -module(mps_controller). -behaviour(gen_server). -include("../include/mps.hrl"). -export([init/1, handle_call/3, handle_cast/2, handle_info/2, terminate/2, code_change/3]). %% ==================================================================== %% API functions %% ==================================================================== -export([start_link/2, stop/0]). %% ==================================================================== %% Behavioural functions %% ==================================================================== %% -record(state, {}). %% register locally so stop/0 can address the server as ?MODULE start_link(Port, Ip) -> gen_server:start_link({local, ?MODULE}, ?MODULE, [Port, Ip], []). stop() -> gen_server:call(?MODULE, stop). %% init/1 %% ==================================================================== %% @doc gen_server:init/1 -spec init(Args :: term()) -> Result when Result :: {ok, State} | {ok, State, Timeout} | {ok, State, hibernate} | {stop, Reason :: term()} | ignore, State :: term(), Timeout :: non_neg_integer() | infinity. %% ==================================================================== init([Port, Ip]) -> process_flag(trap_exit, true), {ok, Sock} = gen_udp:open(Port, [binary, {active, false}, {reuseaddr, true}, {ip, Ip} ]), {ok, #udp_conn_state{sock = Sock}, 0}.
%% handle_call/3 %% ==================================================================== %% @doc gen_server:handle_call/3 -spec handle_call(Request :: term(), From :: {pid(), Tag :: term()}, State :: term()) -> Result when Result :: {reply, Reply, NewState} | {reply, Reply, NewState, Timeout} | {reply, Reply, NewState, hibernate} | {noreply, NewState} | {noreply, NewState, Timeout} | {noreply, NewState, hibernate} | {stop, Reason, Reply, NewState} | {stop, Reason, NewState}, Reply :: term(), NewState :: term(), Timeout :: non_neg_integer() | infinity, Reason :: term(). %% ==================================================================== handle_call(Request, From, State) -> Reply = ok, {reply, Reply, State}. %% handle_cast/2 %% ==================================================================== %% @doc gen_server:handle_cast/2 -spec handle_cast(Request :: term(), State :: term()) -> Result when Result :: {noreply, NewState} | {noreply, NewState, Timeout} | {noreply, NewState, hibernate} | {stop, Reason :: term(), NewState}, NewState :: term(), Timeout :: non_neg_integer() | infinity. %% ==================================================================== handle_cast(Msg, State) -> {noreply, State}. %% handle_info/2 %% ==================================================================== %% @doc gen_server:handle_info/2 -spec handle_info(Info :: timeout | term(), State :: term()) -> Result when Result :: {noreply, NewState} | {noreply, NewState, Timeout} | {noreply, NewState, hibernate} | {stop, Reason :: term(), NewState}, NewState :: term(), Timeout :: non_neg_integer() | infinity. %% ==================================================================== handle_info({udp, Socket, Host, Port, Bin}, State) -> {noreply, State, 1}; handle_info(timeout, #udp_conn_state{sock = Sock} = State) -> inet:setopts(Sock, [{active, once}]), {noreply, State}; handle_info(Info, State) -> {noreply, State}. 
%% terminate/2 %% ==================================================================== %% @doc gen_server:terminate/2 -spec terminate(Reason, State :: term()) -> Any :: term() when Reason :: normal | shutdown | {shutdown, term()} | term(). %% ==================================================================== terminate(Reason, State) -> ok. %% code_change/3 %% ==================================================================== %% @doc gen_server:code_change/3 -spec code_change(OldVsn, State :: term(), Extra :: term()) -> Result when Result :: {ok, NewState :: term()} | {error, Reason :: term()}, OldVsn :: Vsn | {down, Vsn}, Vsn :: term(). %% ==================================================================== code_change(OldVsn, State, Extra) -> {ok, State}. On Wed, Dec 9, 2015 at 4:27 PM, Fred Hebert wrote: > On 12/09, Bogdan Andu wrote: > >> handle_info({udp, Socket, Host, Port, Bin}, State) -> >> {noreply, State}; >> >> In a few minutes the memory allocated to binary increases to ~ 500MB >> by running the command: >> >> ffmpeg -f lavfi -i aevalsrc="sin(40*2*PI*t)" -ar 8000 -f mulaw -f rtp >> rtp:// >> 10.10.13.104:5004 >> >> It seems that the Bins are accumulated in the process memory area, and >> never >> are deallocated >> unless the process is killed, which is not an option. >> > > So there are two ways about that. If the size of each binary is > 64 bytes, > then the binary is moved to a global shared heap, and is only collected > once no process at all has a reference to it anymore. > > This takes a few forms: > > 1. either the process receiving the data and passing it on holds on to it > inadvertently by not garbage-collecting > 2. the processes the messages would be forwarded to are keeping a copy. > 3. 
the process isn't keeping up > > In this case, it would be weird for it to be blameable on 2) and 3) since > the snippet above does not forward data, and because "not keeping up" would > affect both forms the same (the following one, I mean:) > > If I change the clause to: >> handle_info({udp1, Socket, Host, Port, Bin}, State) -> >> >> So as long as the process receives the packet, this is accumulated in >> the binary memory area >> and never deallocated. >> > > So the interesting bit there is: are you sure the memory isn't just going > elsewhere? That you're doing nothing with the process? Not matching on the > message does not mean the message isn't read. Any receive operation (or > most of them, as far as I can tell) works by taking the messages in the > mailbox, copying them to the process heap, potentially running a GC, and > then getting the message. > > Simply not matching the message does not take it out of the mailbox; in > fact I would expect bigger leaks with that clause, unless you have a second > one that broadly matches on all messages and discards them. > > Then the problem would point to what you do with the message. > > The thing I would check is: > a) are you keeping more references than you think in what you do, or is > >> handle_info({udp, Socket, Host, Port, Bin}, State) -> >> {noreply, State}; >> > > really the full clause? > > Another option can be to return {noreply, State, hibernate} from time to > time, which will force a full GC and recompaction of the process memory. > If the memory isn't shed away after that, then you know it's either > still referenced by the process, or another process is referencing it. > > Regards, > Fred. > > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jesper.louis.andersen@REDACTED Wed Dec 9 16:50:05 2015 From: jesper.louis.andersen@REDACTED (Jesper Louis Andersen) Date: Wed, 9 Dec 2015 16:50:05 +0100 Subject: [erlang-questions] gb_trees and dict performance In-Reply-To: <20151209151900.GJ886@fhebert-ltm1> References: <20151209151900.GJ886@fhebert-ltm1> Message-ID: On Wed, Dec 9, 2015 at 4:19 PM, Fred Hebert wrote: > Dicts are trickier, as they rely on a nested bucket approach. After some IRC discussion, both Fred and I agree dicts are O(1) lookup and follow the structure Fred laid out here. I was swapping the array module for the dict module in my mind. -- J. -------------- next part -------------- An HTML attachment was scrubbed... URL: From mononcqc@REDACTED Wed Dec 9 17:13:24 2015 From: mononcqc@REDACTED (Fred Hebert) Date: Wed, 9 Dec 2015 11:13:24 -0500 Subject: [erlang-questions] UDP concurrent server In-Reply-To: References: <20151209142755.GI886@fhebert-ltm1> Message-ID: <20151209161321.GK886@fhebert-ltm1> On 12/09, Bogdan Andu wrote: >init([Port, Ip]) -> > process_flag(trap_exit, true), > > {ok, Sock} = gen_udp:open(Port, [binary, > {active, false}, > {reuseaddr, true}, > {ip, Ip} > ]), > > {ok, #udp_conn_state{sock = Sock}, 0}. >handle_info({udp, Socket, Host, Port, Bin}, State) -> > {noreply, State, 1}; > >handle_info(timeout, #udp_conn_state{sock = Sock} = State) -> > inet:setopts(Sock, [{active, once}]), > {noreply, State}; > >handle_info(Info, State) -> > {noreply, State}. > Uh, interesting. So one thing I'd change early would be to go: handle_info({udp, Socket, Host, Port, Bin}, State=#udp_conn_state{sock = Sock}) -> inet:setopts(Sock, [{active,once}]), {noreply, State}; (and do the same in `init/1') This at least would let you consume information much faster by avoiding manual 1 ms sleeps everywhere. Even a `0' value may help. It doesn't explain the leaking at all though. 
What would explain it is that if you're not matching the message (as in your original email), then you never set the socket to 'active' again, and you never receive traffic. If that's the problem, then you have been comparing a process that receives traffic to a process that does not. It certainly would explain the bad behaviour you've seen. If you want to try to see if a garbage collection would help, you can try the 'recon' library and call 'recon:bin_leak(10)' and it will take a snapshot, run a GC on all processes, then return you those that lost the most memory. If yours is in there, then adding 'hibernate' calls from time to time (say, every 10,000 packets) could help keep things clean. It sucks, but that might be what is needed if the shape of your data is not amenable to clean GCs. If that doesn't solve it, then we get far funner problems with memory allocation. From bog495@REDACTED Tue Dec 8 16:34:27 2015 From: bog495@REDACTED (Bogdan Andu) Date: Tue, 8 Dec 2015 17:34:27 +0200 Subject: [erlang-questions] UDP socket - ealready ERROR In-Reply-To: <5666F638.1010709@gandrade.net> References: <5666F638.1010709@gandrade.net> Message-ID: Hi, gen_udp:close(S) remained there from some tests and indeed has no place there. gen_udp:open is not blocking; it is like listen in TCP. On Tue, Dec 8, 2015 at 5:24 PM, Guilherme Andrade wrote: > Hi, > > On 08/12/15 11:18, Bogdan Andu wrote: > > [...] > > MpsConn = {mps_conn_fsm,{mps_conn_fsm, start_link, [Sock, > SslRecvTimeout, false]}, > temporary, 5000, worker, [mps_conn_fsm]}, > > {ok, {{simple_one_for_one, 0, 1}, [MpsConn]}}. > > [...] > > > mps_dbg@REDACTED)1> > (mps_dbg@REDACTED)1> mps_conn_sup:start_child(). > {ok,<0.62.0>} > (mps_dbg@REDACTED)2> mps_conn_sup:start_child(). 
> {ok,<0.64.0>} > > > Here is the culprit: you're binding the socket only *once* in the > supervisor[1], which will be its controlling process, and then launching > two workers which will both try to read from the same socket (which they > cannot do because they don't control it) and then close it (which, even if > execution were to reach that point, wouldn't be what I imagine > you intend either, because you would end up closing the same socket > twice.) > > One solution is to remove the socket from the child spec and move the > socket binding to the worker code. > > In any case, if it were me, I would first try to have a single binding > process which receives the datagrams and launches workers, and avoid > overload by combining the {active, N}[2] flow with whichever backpressure > metric your system would signal; there's no particular advantage to having > multiple bindings over the same port - you won't really end up processing > multiple flows at once as if they were multiple gen_tcp accepted sockets. > > On a final note, I would also advise against executing potentially > blocking operations in OTP processes' init/1 and would make them asynchronous > instead (e.g. by casting an initialisation request to itself) or you'll > risk choking up the supervision tree. > > > [1]: http://www.erlang.org/doc/man/supervisor.html#id242517 > [2]: http://www.erlang.org/doc/man/inet.html#setopts-2 > > -- > Guilherme > http://www.gandrade.net/ > PGP: 0x602B2AD8 / B348 C976 CCE1 A02A 017E 4649 7A6E B621 602B 2AD8 > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From serge@REDACTED Wed Dec 9 17:36:38 2015 From: serge@REDACTED (Serge Aleynikov) Date: Wed, 9 Dec 2015 11:36:38 -0500 Subject: [erlang-questions] Idea for deprecating EPMD In-Reply-To: References: Message-ID: A good thing about the legacy epmd is that it's slim, and it can be supervised separately from the Erlang VM (e.g. by systemd). 
I am not so sure that the C language choice for the current epmd is bad. The implementation is simple, and does well, unless you are running into limitations of a specific OS (such as RTEMS) that requires an embedded epmd. Running an epmd-like replacement in Erlang in the general case could be tricky when running multiple nodes on the same host, as the epmd would then depend on a single node's availability, and actually the whole epmd protocol might need to be revised. Rather than replacing epmd, I would be much more interested in extending the epmd functionality to handle: 1. Auto-discovery of running local nodes in case of epmd restarts. 2. Support of multiple distribution transport protocols for one node (*) I would love to see some progress on #2, as it's a very big limitation that presently a node is limited to a single distribution transport. Regards, Serge (*) Previously I gave a shot at extending epmd and distribution to support SSL, TCP, Unix Domain Sockets, and other transports, but given the scope of impact that patch was rejected by the OTP team. https://github.com/erlang/otp/pull/121 http://erlang.org/pipermail/erlang-patches/2014-January/004522.html On Tue, Dec 8, 2015 at 6:57 PM, Geoff Cant wrote: > Hi all, I find EPMD to be a regular frustration when deploying and > operating Erlang systems. EPMD is a separate service that needs to be > running for Erlang distribution to work properly, and usually (in systems > that don't use distribution for their main function) it's not set up right, > and you only notice in production because the only time you use > distribution is to get a remote shell (over localhost). (Maybe I'm just bad > at doing this, but I do it a lot) > > Erlang node names already encode host information: > "descriptive_name@REDACTED". If we include the erlang distribution listen > port too, that would remove the need for EPMD. For example: > "descriptive_name@REDACTED:distribution_port". 
Node names using this > scheme would skip the EPMD step, otherwise erlang distribution would fall > back to the current system. > > > My questions for the list are: > * Are you annoyed by epmd too? > * Do you think this idea is worth me writing up into an EEP or writing a > pull request? > * Do you think this idea is unworkable for some reason I'm overlooking? > > Thanks, > -Geoff > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions > -------------- next part -------------- An HTML attachment was scrubbed... URL: From z@REDACTED Wed Dec 9 18:02:21 2015 From: z@REDACTED (Danil Zagoskin) Date: Wed, 9 Dec 2015 20:02:21 +0300 Subject: [erlang-questions] ssl_session_cache: trouble + questions In-Reply-To: References: Message-ID: Hi Lucas! I read the ssl 7.1 sources. As for current maint: Improved: - ssl_session:valid_session already uses the new time API: https://github.com/erlang/otp/blob/maint/lib/ssl/src/ssl_session.erl#L70 - Max cache size is by default 1000 now: https://github.com/erlang/otp/blob/maint/lib/ssl/src/ssl_manager.erl#L69 Left as before: - Cache table is still ordered_set: https://github.com/erlang/otp/blob/maint/lib/ssl/src/ssl_session_cache.erl#L36 - Multiple concurrent validators still possible: https://github.com/erlang/otp/blob/maint/lib/ssl/src/ssl_manager.erl#L385 Limiting session cache size and using monotonic_time as timestamp should prevent our kind of problems. Thank you! On Wed, Dec 9, 2015 at 3:39 PM, Lukas Larsson wrote: > Hello! > > You did not mention which version of ssl you are using. If you do not > have the latest version, please have a look at the release notes in ssl and > see if any of the fixes in there applies to you. There are fixes made in > ssl that relate to a very large session cache. > > Lukas > > On Tue, Dec 8, 2015 at 1:15 PM, Danil Zagoskin wrote: > >> Hi! 
>> >> Recently our servers started to consume lots of SYS CPU. >> Inside a VM top processes (by reductions per second) were ssl session >> validators. >> Most popular current function in runnable processes was >> calendar:datetime_to_gregorian_seconds/2. >> Also gproc was very slow (it uses ETS). >> >> According to `ets:i().` the largest ETS table was: >> 49178 server_ssl_otp_session_cache ordered_set 5015080 >> 305919839 ssl_manager >> >> We have worked around the problem by using a lower session_lifetime. >> >> But reading the code I came to these questions: >> - The cache is `ordered_set` type which has logarithmic access time. >> Does it have to be `ordered_set`, not just `set` (with constant access >> time)? >> - There is no protection against running multiple validators. This leads >> to many processes accessing a single table and doing the same work. This >> seems to greatly increase SYS CPU usage and slowdown in other ETS tables. >> Should we skip new validator start if the previous one is still running? >> - ssl_session:valid_session is called for every session in the cache and >> calls `calendar:datetime_to_gregorian_seconds({date(), time()})` itself. >> Should we use `erlang:monotonic_time(seconds)` everywhere instead? Or maybe >> we should pre-calculate the minimal allowed timestamp to avoid extra >> arithmetic? >> >> >> -- >> Danil Zagoskin | z@REDACTED >> >> _______________________________________________ >> erlang-questions mailing list >> erlang-questions@REDACTED >> http://erlang.org/mailman/listinfo/erlang-questions >> >> > -- Danil Zagoskin | z@REDACTED -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From bog495@REDACTED Wed Dec 9 18:29:26 2015 From: bog495@REDACTED (Bogdan Andu) Date: Wed, 9 Dec 2015 19:29:26 +0200 Subject: [erlang-questions] UDP concurrent server In-Reply-To: <20151209161321.GK886@fhebert-ltm1> References: <20151209142755.GI886@fhebert-ltm1> <20151209161321.GK886@fhebert-ltm1> Message-ID: Update to my previous email: Doing some tests led to the following conclusion: in Erlang/OTP 17 [erts-6.3] the leak does not appear at all: entop output: Node: 'mps_dbg@REDACTED' (Disconnected) (17/6.3) unix (linux 4.2.6) CPU:2 SMP +A:10 Time: local time 19:21:51, up for 000:00:02:45, 0ms latency, Processes: total 53 (RQ 0) at 159238 RpI using 4516.0k (4541.4k allocated) Memory: Sys 8348.8k, Atom 190.9k/197.7k, Bin 134.3k, Code 4737.9k, Ets 285.7k It is Erlang/OTP 18 [erts-7.0] [source] [64-bit] [smp:2:2] [async-threads:10] [kernel-poll:false] where the leak happens. In 2 minutes I would have about 220 MB of binaries. So maybe something changed in OTP 18 that needs to be taken into account? More long-running tests must be done for better conclusions. On Wed, Dec 9, 2015 at 6:13 PM, Fred Hebert wrote: > On 12/09, Bogdan Andu wrote: > >> init([Port, Ip]) -> >> process_flag(trap_exit, true), >> >> {ok, Sock} = gen_udp:open(Port, [binary, >> {active, false}, >> {reuseaddr, true}, >> {ip, Ip} >> ]), >> >> {ok, #udp_conn_state{sock = Sock}, 0}. >> handle_info({udp, Socket, Host, Port, Bin}, State) -> >> {noreply, State, 1}; >> >> handle_info(timeout, #udp_conn_state{sock = Sock} = State) -> >> inet:setopts(Sock, [{active, once}]), >> {noreply, State}; >> >> handle_info(Info, State) -> >> {noreply, State}. >> >> > Uh, interesting. So one thing I'd change early would be to go: > > handle_info({udp, Socket, Host, Port, Bin}, State=#udp_conn_state{sock > = Sock}) -> > inet:setopts(Sock, [{active,once}]), > {noreply, State}; > > (and do the same in `init/1') > > @ Fred: yes, you are right, it is cleaner and faster that way. 
But initially I wanted a separation of these. > This at least would let you consume information much faster by avoiding > manual 1 ms sleeps everywhere. Even a `0' value may help. It doesn't > explain the leaking at all though. > > > What would explain it is that if you're not matching the message (as in > your original email), then you never set the socket to 'active' again, and > you never receive traffic. > > Not matching the message caused no leaks. > If that's the problem, then you have been comparing a process that > receives traffic to a process that does not. It certainly would explain the > bad behaviour you've seen. > > If you want to try to see if a garbage collection would help, you can try > the 'recon' library and call 'recon:bin_leak(10)' and it will take a > snapshot, run a GC on all processes, then return you those that lost the > most memory. If yours is in there, then adding 'hibernate' calls from time > to time (say, every 10,000 packets) could help keep things clean. > > It sucks, but that might be what is needed if the shape of your data is > not amenable to clean GCs. If that doesn't solve it, then we get far funner > problems with memory allocation. > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mononcqc@REDACTED Wed Dec 9 18:39:04 2015 From: mononcqc@REDACTED (Fred Hebert) Date: Wed, 9 Dec 2015 12:39:04 -0500 Subject: [erlang-questions] UDP concurrent server In-Reply-To: References: <20151209142755.GI886@fhebert-ltm1> <20151209161321.GK886@fhebert-ltm1> Message-ID: <20151209173902.GM886@fhebert-ltm1> On 12/09, Bogdan Andu wrote: >So maybe something changed in OTP 18 that needs to be taken into >account? > There was a bug in Erlang/OTP 18.0 where the shell process would gradually leak binary memory. 18.1 and later should be unaffected by it. 
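Fred's suggestion of hibernating every so many packets can be sketched as a fragment of the controller posted earlier in the thread. The `count` field and the 10,000-packet threshold are illustrative assumptions, not code from the thread; #udp_conn_state{} would have to be extended accordingly:

```erlang
%% Sketch only: assumes #udp_conn_state{} has been extended with a
%% 'count' field initialised to 0. Returning 'hibernate' forces a full
%% GC and compaction of the process heap, shedding old binary refs.
-define(HIBERNATE_EVERY, 10000).

handle_info({udp, _Socket, _Host, _Port, _Bin},
            State = #udp_conn_state{sock = Sock, count = N}) ->
    inet:setopts(Sock, [{active, once}]),
    NewState = State#udp_conn_state{count = N + 1},
    case (N + 1) rem ?HIBERNATE_EVERY of
        0 -> {noreply, NewState, hibernate};
        _ -> {noreply, NewState}
    end;
```

Running recon:bin_leak(10) before and after a change like this makes it easier to tell whether the binaries were truly leaking or merely awaiting collection.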
From bog495@REDACTED Wed Dec 9 18:49:55 2015 From: bog495@REDACTED (Bogdan Andu) Date: Wed, 9 Dec 2015 19:49:55 +0200 Subject: [erlang-questions] UDP concurrent server In-Reply-To: <20151209173902.GM886@fhebert-ltm1> References: <20151209142755.GI886@fhebert-ltm1> <20151209161321.GK886@fhebert-ltm1> <20151209173902.GM886@fhebert-ltm1> Message-ID: Well, if this is the case... then case closed. But to leak memory at this rate (100 MB/minute, in this particular context) it must have been a really nasty bug. Thanks to all you guys for the help. /Bogdan On Wed, Dec 9, 2015 at 7:39 PM, Fred Hebert wrote: > On 12/09, Bogdan Andu wrote: > >> So maybe something changed in OTP 18 that needs to be taken into >> account? >> >> There was a bug in Erlang/OTP 18.0 where the shell process would > gradually leak binary memory. 18.1 and later should be unaffected by it. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mjtruog@REDACTED Wed Dec 9 19:57:53 2015 From: mjtruog@REDACTED (Michael Truog) Date: Wed, 09 Dec 2015 10:57:53 -0800 Subject: [erlang-questions] Idea for deprecating EPMD In-Reply-To: References: Message-ID: <566879B1.6000806@gmail.com> On 12/08/2015 03:57 PM, Geoff Cant wrote: > Hi all, I find EPMD to be a regular frustration when deploying and operating Erlang systems. EPMD is a separate service that needs to be running for Erlang distribution to work properly, and usually (in systems that don't use distribution for their main function) it's not set up right, and you only notice in production because the only time you use distribution is to get a remote shell (over localhost). (Maybe I'm just bad at doing this, but I do it a lot) > > Erlang node names already encode host information: "descriptive_name@REDACTED". If we include the erlang distribution listen port too, that would remove the need for EPMD. For example: "descriptive_name@REDACTED:distribution_port". 
Node names using this scheme would skip the EPMD step, otherwise erlang distribution would fall back to the current system. > > > My questions for the list are: > * Are you annoyed by epmd too? > * Do you think this idea is worth me writing up into an EEP or writing a pull request? > * Do you think this idea is unworkable for some reason I'm overlooking? The problem seems to be that epmd is a separate OS process. That is already being addressed in the pull request https://github.com/erlang/otp/pull/815 to make it an Erlang process. Not sure about the reason why epmd was a separate OS process originally (I assume this doesn't impact -heart and VM instance count stuff, so epmd shouldn't be providing extra fault-tolerance). From erlang@REDACTED Wed Dec 9 20:39:14 2015 From: erlang@REDACTED (Stefan Marr) Date: Wed, 9 Dec 2015 20:39:14 +0100 Subject: [erlang-questions] Call for Papers: Reflect'16, Workshop on Reflection and Runtime Meta-Programming Techniques Message-ID: Call for Papers: Reflect'16 =========================== Workshop on Reflection and Runtime Meta-Programming Techniques Co-located with Modularity 2016 March 14 or 15, 2016, Málaga, Spain Twitter @ReflectWorkshop http://2016.modularity.info/track/Reflect-2016-papers With modern mainstream languages embracing runtime reflection, for instance in JavaScript with proxies and Ruby with its culture of using meta-programming, the research on meta-architectures, reflective programming, and other meta-programming techniques has become relevant and timely once again. Over the last couple of years, these techniques saw a surge of interest benefiting from the JavaScript standardization process as well as performance improvements based on just-in-time compilation that increased their general acceptance. The Reflect'16 workshop aims to bring together people who do research on reflection and runtime meta-programming, as well as users of such techniques to e.g. 
build applications, language extensions, or software tools. We invite contributions to the workshop on a wide range of topics related to design, implementation, and application of reflective APIs and runtime meta-programming techniques, as well as empirical studies and typing for such systems and languages. We welcome technical papers as well as work-in-progress and position papers from the academic as well as industrial perspective. Position papers should take a perhaps controversial stance on a specific topic and argue the position well. Topics of Interest ------------------ The topics of interest for the workshop include, but are not limited to: - applications to middleware, frameworks, and DSLs - reflection and metaobject protocols to enable tooling - meta-level architectures and reflective middleware for modern runtime platforms (e.g. IoT, cyber-physical systems, cloud/grid computing, exa-scale systems, smart grids, mobile systems) - optimization techniques to minimize runtime overhead of reflection - use for application-level runtime optimization - new language constructs for reflection and meta-programming - security in reflective systems and capability-based designs - application of reflective techniques to achieve adaptability, separation of concerns, code reuse, etc. - empirical studies of the dynamic behavior of reflective programs - application to enable complex concurrent systems - typing of reflective programs Workshop Format and Submissions ------------------------------- This workshop welcomes the presentation of mature work as well as discussion of new ideas and emerging problems as part of a mini-conference format. Furthermore, we plan for more interactive brainstorming and demonstration sessions between the formal presentations to enable an active exchange of ideas. The workshop papers will be published in both the electronic proceedings of the Modularity conference and in the ACM Digital Library, if not requested otherwise by the authors. 
Papers are to be submitted using the ACM sigplanconf style at 9pt font size. See http://www.acm.org/publications/article-templates/proceedings-template.html. position and work-in-progress paper: max. 4 pages technical paper: max. 8 pages demos and posters: 1-page abstract For the submission, please use the EasyChair system: https://easychair.org/conferences/?conf=reflect16 Important Dates --------------- abstract submission: January 11, 2016 paper submission: January 15, 2016 notification: February 6, 2016 camera-ready: February 13, 2016 all deadlines: Anywhere on Earth (AoE), i.e., GMT/UTC-12:00 hour Workshop Organizers ------------------- Gilad Bracha, Google Shigeru Chiba, University of Tokyo Elisa Gonzalez Boix, Vrije Universiteit Brussel Stefan Marr, Johannes Kepler University Linz Program Committee ----------------- Daniele Bonetta, Oracle Labs, Austria Damien Cassou, University of Lille 1, France Siobhan Clarke, Trinity College Dublin, Ireland Stephane Ducasse, Inria, France Robert Hirschfeld, HPI, Germany Hridesh Rajan, Iowa State University, USA Romain Rouvoy, University Lille 1 and INRIA, France Eric Tanter, University of Chile, Chile Laurie Tratt, King's College, UK Tom Van Cutsem, Bell Labs, Belgium Takuo Wantanabe, Tokyo Institute of Technology, Japan Tijs van der Storm, CWI, NL -- Stefan Marr Johannes Kepler Universität Linz http://stefan-marr.de/research/ From ok@REDACTED Thu Dec 10 00:41:14 2015 From: ok@REDACTED (Richard A. O'Keefe) Date: Thu, 10 Dec 2015 12:41:14 +1300 Subject: [erlang-questions] gb_trees and dict performance In-Reply-To: References: Message-ID: On 10/12/2015, at 2:00 am, Vladimir Ralev wrote: > Hi all, > > I am curious about the O-complexity of the operations in these data > structures in Erlang. It would seem to me that because of immutability > adding elements in those structures would often require a duplication > of the data structure and copying is an O(N) operation itself. 
Because of immutability, adding (or deleting) elements in those structures nearly always requires duplicating ***PART*** of the data structures. But since they are trees, ONLY part. The amount of new space allocated (in pretty much any functional language) is going to be roughly proportional to the length of the path that the call traverses. This means O(log N) time and O(log N) space, even taking (local) rebalancing into account. > > 2. If you have a job interview and are asked what is the complexity of > dict:store() and gb_trees:enter() what would you say? Me, I'd hire the guy who said "I'd expect it to be logarithmic, but if it mattered I would make sure to check the documentation, and if it REALLY mattered I would do my own benchmarks." From adam@REDACTED Thu Dec 10 01:29:06 2015 From: adam@REDACTED (Adam Cammack) Date: Wed, 9 Dec 2015 18:29:06 -0600 Subject: [erlang-questions] IP search optimization In-Reply-To: <20151123193420.GA2919@serenity> References: <20151119170510.GA72389@staff.retn.net> <20151120201804.GD8312@serenity> <20151122165345.GA13944@staff.retn.net> <20151123193420.GA2919@serenity> Message-ID: <20151210002906.GB10390@serenity> On Mon, Nov 23, 2015 at 01:34:20PM -0600, Adam Cammack wrote: > It would be interesting to see how the two methods scale with IPv6 > addresses. I'll try to work up a benchmark later. It took me a little while, but I finally got a chance to benchmark them with IPv6 (I also added parallel testing, to see which algorithm scales better). But first, a correction: > Delta = (A bor B) bxor (A band B), This simplifies to A bxor B, I don't know what I was thinking. Time for 100,000 CIDR lookups in a 1,000,000 element table on each of 10 processes, matching the /0 rule with nested routes. 
ets:prev/2 (IPv4): 3.424219 ets:lookup/2 (IPv4): 2.446373 ets:prev/2 (IPv6): 4.067232 ets:lookup/2 (IPv6): 10.953924 https://gist.github.com/acammack/17118c377d8bcf7f98ab The IPv6 mock data was done hastily and I would happily accept improvements. Some notable results: The lookup/2 algorithm benefits a lot more from concurrency on my 8 core smp system. For the same total number of calls, lookup/2 runs in about a quarter of the time on 10 threads, but prev/2 runs in only half to a third. This is enough for lookup/2 to become a bit faster at matching the /0 block on IPv4 (prev/2 remains faster on matches against a flat table). The lookup/2 algorithm takes about 4-5 times as long to match the /0 rule on IPv6 addresses vs IPv4. Since its runtime is proportional to the number of bits in an address that are not matched, it starts getting expensive on my box for IPv6 (~0.4 µs for each bit shifted off in the search, single threaded). Fine for 32 bits, questionable for 128 if every microsecond counts. The above effect causes the lookup/2 algorithm to take ~50% longer to match a /64 IPv6 block than the prev/2 algorithm takes to match the /0 IPv6 block, even with the concurrency advantage. My advice remains the same: if possible, flatten the lookup table and use ets:prev/2 without worrying about traversing up the table. Otherwise, measure with your lookup table and a sampling of actual addresses (but I bet prev/2 will be faster for at least IPv6). -- Adam Cammack From ingela.andin@REDACTED Thu Dec 10 12:14:30 2015 From: ingela.andin@REDACTED (Ingela Andin) Date: Thu, 10 Dec 2015 12:14:30 +0100 Subject: [erlang-questions] ssl_session_cache: trouble + questions In-Reply-To: References: Message-ID: Hi! 2015-12-09 18:02 GMT+01:00 Danil Zagoskin : > Hi Lucas! > > I read the ssl 7.1 sources. 
> > As for current maint: > Improved: > - ssl_session:valid_session already uses the new time API: > https://github.com/erlang/otp/blob/maint/lib/ssl/src/ssl_session.erl#L70 > - Max cache size is by default 1000 now: > https://github.com/erlang/otp/blob/maint/lib/ssl/src/ssl_manager.erl#L69 > Left as before: > - Cache table is still ordered_set: > https://github.com/erlang/otp/blob/maint/lib/ssl/src/ssl_session_cache.erl#L36 > There is a reason for using ordered_set; see commit 35ffd19df295f5ff73f9968b65dc8ad957c943e5 - Multiple concurrent validators still possible: > https://github.com/erlang/otp/blob/maint/lib/ssl/src/ssl_manager.erl#L385 > > Only one validator instance will be started as a result of hitting the threshold (max value), but you are correct that the poller validator assumes that the old one should already be gone. I will make a backlog item to make sure there is only one of those; in the meantime the other changes will hopefully make this less of a problem. Regards Ingela Erlang/OTP Team - Ericsson AB > Limiting session cache size and using monotonic_time as timestamp should > prevent our kind of problems. > > Thank you! > > On Wed, Dec 9, 2015 at 3:39 PM, Lukas Larsson wrote: > >> Hello! >> >> You did not mention which version of ssl you are using. If you do not >> have the latest version, please have a look at the release notes in ssl and >> see if any of the fixes in there applies to you. There are fixes made in >> ssl that relate to a very large session cache. >> >> Lukas >> >> On Tue, Dec 8, 2015 at 1:15 PM, Danil Zagoskin wrote: >> >>> Hi! >>> >>> Recently our servers started to consume lots of SYS CPU. >>> Inside a VM top processes (by reductions per second) were ssl session >>> validators. >>> Most popular current function in runnable processes was >>> calendar:datetime_to_gregorian_seconds/2. >>> Also gproc was very slow (it uses ETS). 
>>> >>> According to `ets:i().` the largest ETS table was: >>> 49178 server_ssl_otp_session_cache ordered_set 5015080 >>> 305919839 ssl_manager >>> >>> We have worked around the problem by using lower session_lifetime. >>> >>> But reading the code I came to these questions: >>> - The cache is `ordered_set` type which has logarithmic access time. >>> Does it have to be `ordered_set`, not just `set` (with constant access >>> time)? >>> - There is no protection agains running multiple validators. This >>> leads to many processes accessing single table and doing the same work. >>> This seems to greatly increase SYS CPU usage and slowdown in other ETS >>> tables. Should we skip new validator start if previous one is still running? >>> - ssl_session:valid_session is called for every session in cache and >>> calls `calendar:datetime_to_gregorian_seconds({date(), time()})` itself. >>> Should we use `erlang:monotonic_time(seconds)` everywhere instead? Or maybe >>> we should pre-calculate minimal allowed timestamp to avoid extra >>> arithmetics? >>> >>> >>> -- >>> Danil Zagoskin | z@REDACTED >>> >>> _______________________________________________ >>> erlang-questions mailing list >>> erlang-questions@REDACTED >>> http://erlang.org/mailman/listinfo/erlang-questions >>> >>> >> > > > -- > Danil Zagoskin | z@REDACTED > > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From roberto@REDACTED Thu Dec 10 12:16:29 2015 From: roberto@REDACTED (Roberto Ostinelli) Date: Thu, 10 Dec 2015 12:16:29 +0100 Subject: [erlang-questions] Understanding global:set_lock/1,2,3 Message-ID: Dear list, I'm trying to get an understanding of what global:set_lock/1,2,3 exactly does. 
I read from the docs: Sets a lock on the specified nodes (or on all nodes if none are specified) on ResourceId for LockRequesterId. Let's say that I want to perform a series of operations on mnesia schemas and want to avoid all other nodes accessing mnesia tables while one node is busy at it. Is it enough to write: global:trans({{?MODULE, lock_mnesia_for_a_while}, self()}, fun() -> do_things_on_mnesia() end). I don't get how this could lock mnesia for the other nodes. Can some kind soul point me in the right direction? Thank you, r. -------------- next part -------------- An HTML attachment was scrubbed... URL: From z@REDACTED Thu Dec 10 12:36:53 2015 From: z@REDACTED (Danil Zagoskin) Date: Thu, 10 Dec 2015 14:36:53 +0300 Subject: [erlang-questions] ssl_session_cache: trouble + questions In-Reply-To: References: Message-ID: Hi! Thank you for the explanation about ordered_set! BTW, does anybody use custom cache modules? If not, old session cleanup could be done with ets:match_delete. It should be faster by not calling a function for each entry. On Thu, Dec 10, 2015 at 2:14 PM, Ingela Andin wrote: > Hi! > > 2015-12-09 18:02 GMT+01:00 Danil Zagoskin : > >> Hi Lucas! >> >> I read the ssl 7.1 sources. 
>> >> As for current maint: >> Improved: >> - ssl_session:valid_session already uses the new time API: >> https://github.com/erlang/otp/blob/maint/lib/ssl/src/ssl_session.erl#L70 >> - Max cache size is by default 1000 now: >> https://github.com/erlang/otp/blob/maint/lib/ssl/src/ssl_manager.erl#L69 >> Left as before: >> - Cache table is still ordered_set: >> https://github.com/erlang/otp/blob/maint/lib/ssl/src/ssl_session_cache.erl#L36 >> > > There is a reason for using orderd_set see: > 35ffd19df295f5ff73f9968b65dc8ad957c943e5 > > > - Multiple concurrent validators still possible: >> https://github.com/erlang/otp/blob/maint/lib/ssl/src/ssl_manager.erl#L385 >> >> > There will only be one validator instance started for the reason of > hitting the threshold (max value), but you are correct that the poller > validator assumes that the old one already should be gone. I will make a > backlog item to make sure there is only one of those, in the meantime the > other changes will hopefully makes this less of a problem. > > > Regards Ingela Erlang/OTP Team - Ericsson AB > > > >> Limiting session cache size and using monotonic_time as timestamp should >> prevent our kind of problems. >> >> Thank you! >> >> >> On Wed, Dec 9, 2015 at 3:39 PM, Lukas Larsson wrote: >> >>> Hello! >>> >>> You did not mention what version of ssl that you are using? If you do >>> not have the latest version, please have a look at the release notes in ssl >>> and see if any of the fixes in there applies to you. There are fixes made >>> in ssl that relate to a very large session cache. >>> >>> Lukas >>> >>> On Tue, Dec 8, 2015 at 1:15 PM, Danil Zagoskin wrote: >>> >>>> Hi! >>>> >>>> Recently our servers started to consume lots of SYS CPU. >>>> Inside a VM top processes (by reductions per second) were ssl session >>>> validators. >>>> Most popular current function in runnable processes was >>>> calendar:datetime_to_gregorian_seconds/2. >>>> Also gproc was very slow (it uses ETS). 
>>>> >>>> According to `ets:i().` the largest ETS table was: >>>> 49178 server_ssl_otp_session_cache ordered_set 5015080 >>>> 305919839 ssl_manager >>>> >>>> We have worked around the problem by using lower session_lifetime. >>>> >>>> But reading the code I came to these questions: >>>> - The cache is `ordered_set` type which has logarithmic access time. >>>> Does it have to be `ordered_set`, not just `set` (with constant access >>>> time)? >>>> - There is no protection agains running multiple validators. This >>>> leads to many processes accessing single table and doing the same work. >>>> This seems to greatly increase SYS CPU usage and slowdown in other ETS >>>> tables. Should we skip new validator start if previous one is still running? >>>> - ssl_session:valid_session is called for every session in cache and >>>> calls `calendar:datetime_to_gregorian_seconds({date(), time()})` itself. >>>> Should we use `erlang:monotonic_time(seconds)` everywhere instead? Or maybe >>>> we should pre-calculate minimal allowed timestamp to avoid extra >>>> arithmetics? >>>> >>>> >>>> -- >>>> Danil Zagoskin | z@REDACTED >>>> >>>> _______________________________________________ >>>> erlang-questions mailing list >>>> erlang-questions@REDACTED >>>> http://erlang.org/mailman/listinfo/erlang-questions >>>> >>>> >>> >> >> >> -- >> Danil Zagoskin | z@REDACTED >> >> _______________________________________________ >> erlang-questions mailing list >> erlang-questions@REDACTED >> http://erlang.org/mailman/listinfo/erlang-questions >> >> > -- Danil Zagoskin | z@REDACTED -------------- next part -------------- An HTML attachment was scrubbed... 
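Danil's match_delete suggestion above can be sketched in a few lines. This is an illustrative simplification, not the real ssl code: it assumes a flat {Key, ExpiresAt} layout rather than the #session{} records that ssl_session_cache actually stores, and the module and function names are invented.

```erlang
%% Sketch of the idea from the thread: store an absolute expiry computed
%% with erlang:monotonic_time/1 at insert time, so expired entries can be
%% swept with a single ets:select_delete/2 call instead of invoking a
%% validation fun (and calendar arithmetic) per entry.
-module(cache_sweep).
-export([new/0, put/3, sweep/1]).

new() ->
    ets:new(session_cache, [ordered_set, public]).

put(Tab, Key, LifetimeSecs) ->
    ExpiresAt = erlang:monotonic_time(seconds) + LifetimeSecs,
    ets:insert(Tab, {Key, ExpiresAt}).

sweep(Tab) ->
    Now = erlang:monotonic_time(seconds),
    %% Match spec: delete every {_, ExpiresAt} tuple with ExpiresAt < Now.
    ets:select_delete(Tab, [{{'_', '$1'}, [{'<', '$1', Now}], [true]}]).
```

ets:select_delete/2 returns the number of deleted entries, so a sweeper built this way can also report how much it reclaimed.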
URL: From dmkolesnikov@REDACTED Thu Dec 10 12:37:45 2015 From: dmkolesnikov@REDACTED (Dmitry Kolesnikov) Date: Thu, 10 Dec 2015 13:37:45 +0200 Subject: [erlang-questions] Understanding global:set_lock/1,2,3 In-Reply-To: References: Message-ID: <630FE65C-04FC-4D13-8E31-03FF682C2BF1@gmail.com> Hello, As far as I've understood the code, the global lock is a variant of a Lamport clock implementation. http://amturing.acm.org/p558-lamport.pdf You can use it for that purpose, but I've never used it for my tasks because it requires message exchange with other cluster nodes. However, some parts of the OTP platform use it. If you are aiming at a cloud-based deployment (e.g. AWS), it is better to think of something else. Best Regards, Dmitry > On Dec 10, 2015, at 1:16 PM, Roberto Ostinelli wrote: > > Dear list, > I'm trying to get an understanding of what global:set_lock/1,2,3 exactly does. > > I read from the docs: > Sets a lock on the specified nodes (or on all nodes if none are specified) on ResourceId for LockRequesterId. > > Let's say that I want to perform a series of operations on mnesia schemas and want to avoid all other nodes accessing mnesia tables while one node is busy at it. > > Is it enough to write: > > global:trans({{?MODULE, lock_mnesia_for_a_while}, self()}, > fun() -> > do_things_on_mnesia() > end). > > I don't get how this could lock mnesia for the other nodes. > > Can some kind soul point me in the right direction? > > Thank you, > r. 
> _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions From Tobias.Schlager@REDACTED Thu Dec 10 13:55:36 2015 From: Tobias.Schlager@REDACTED (Tobias Schlager) Date: Thu, 10 Dec 2015 12:55:36 +0000 Subject: [erlang-questions] Understanding global:set_lock/1,2,3 In-Reply-To: References: Message-ID: <12F2115FD1CCEE4294943B2608A18FA301A273E7C3@MAIL01.win.lbaum.eu> Hi Roberto, AFAIK, for the user, global locks work similarly to other mutex/lock implementations; in your case {?MODULE, lock_mnesia_for_a_while} is your 'mutex'. Locking this mutex does not automagically lock mnesia or guard shared data structures. However, a second process trying to acquire/lock the mutex will be blocked until the current owner releases the lock. Please correct me if I'm wrong. Regards Tobias ________________________________ From: erlang-questions-bounces@REDACTED [erlang-questions-bounces@REDACTED] on behalf of Roberto Ostinelli [roberto@REDACTED] Sent: Thursday, 10 December 2015 12:16 To: Erlang Subject: [erlang-questions] Understanding global:set_lock/1,2,3 Dear list, I'm trying to get an understanding of what global:set_lock/1,2,3 exactly does. I read from the docs: Sets a lock on the specified nodes (or on all nodes if none are specified) on ResourceId for LockRequesterId. Let's say that I want to perform a series of operations on mnesia schemas and want to avoid all other nodes accessing mnesia tables while one node is busy at it. Is it enough to write: global:trans({{?MODULE, lock_mnesia_for_a_while}, self()}, fun() -> do_things_on_mnesia() end). I don't get how this could lock mnesia for the other nodes. Can some kind soul point me in the right direction? Thank you, r. -------------- next part -------------- An HTML attachment was scrubbed... 
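Tobias's point — that the lock is purely advisory — can be made concrete with a small sketch. The resource name and wrapper function below are invented for illustration; mutual exclusion only works if every node funnels its schema operations through the same wrapper:

```erlang
%% Hypothetical wrapper (names are made up, not from the thread's code).
%% global:trans/4 sets the named lock on the given nodes, runs Fun, and
%% always releases the lock afterwards. Code that touches mnesia without
%% going through this wrapper is NOT blocked -- the lock is advisory.
with_schema_lock(Fun) ->
    LockId = {{?MODULE, schema_maintenance}, self()},
    Retries = 10,  %% with the default (infinity), trans never returns aborted
    case global:trans(LockId, Fun, [node() | nodes()], Retries) of
        aborted -> {error, lock_not_acquired};
        Result  -> {ok, Result}
    end.
```

A second node calling with_schema_lock/1 while the first holds the lock retries (with random backoff) until the lock is free or Retries is exhausted, in which case global:trans returns aborted.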
URL: From roberto@REDACTED Thu Dec 10 17:10:23 2015 From: roberto@REDACTED (Roberto Ostinelli) Date: Thu, 10 Dec 2015 17:10:23 +0100 Subject: [erlang-questions] Know if running in CT Message-ID: Dear list, Is there a simple way to know if code is running in CT? Ideally, I would like to define a conditional macro depending on code running in tests or not. Any ideas welcome! Thanks, r. -------------- next part -------------- An HTML attachment was scrubbed... URL: From silviu.cpp@REDACTED Thu Dec 10 17:13:59 2015 From: silviu.cpp@REDACTED (Caragea Silviu) Date: Thu, 10 Dec 2015 18:13:59 +0200 Subject: [erlang-questions] observer app in production Message-ID: Hello, Is it OK to use observer in a production environment to look at process queues, memory usage, and whatever else is there? Are there any performance issues for the node if you use this? Currently we are using bigwig, but observer seems more robust feature-wise. Also, are there any other solutions for this kind of job? Silviu -------------- next part -------------- An HTML attachment was scrubbed... URL: From jesper.louis.andersen@REDACTED Thu Dec 10 17:18:44 2015 From: jesper.louis.andersen@REDACTED (Jesper Louis Andersen) Date: Thu, 10 Dec 2015 17:18:44 +0100 Subject: [erlang-questions] Know if running in CT In-Reply-To: References: Message-ID: On Thu, Dec 10, 2015 at 5:10 PM, Roberto Ostinelli wrote: > Ideally, I would like to define a conditional macro depending on code > running in tests or not. > > You can probably have a configuration option via application:get_env/2 you can use to make the discrimination. Off the top of my head, this is what I would do. > Any ideas welcome! > Advice: don't do it. Keep your test artifact equivalent to the deployment artifact. Make the application such that normal configuration parameters can be set in order to make the application work in the test environment. 
It is safer in the long run because you don't want too many diverging code paths based on the environment it lives in. It is simply too brittle, and that would be my advice. -- J. -------------- next part -------------- An HTML attachment was scrubbed... URL: From felixgallo@REDACTED Thu Dec 10 17:19:33 2015 From: felixgallo@REDACTED (Felix Gallo) Date: Thu, 10 Dec 2015 08:19:33 -0800 Subject: [erlang-questions] observer app in production In-Reply-To: References: Message-ID: You might also look at erlyberly (https://github.com/andytill/erlyberly), which is a fairly new debugging/observation tool that's intended to complement observer. F. On Thu, Dec 10, 2015 at 8:13 AM, Caragea Silviu wrote: > Hello, > > It's ok to use observer in production environment to look at processes > queues, memory usage and whatever else it's there? > > There any any performance issues for the node if you use this ? Currently > we are using bigwig but observer seems more robust feature wise. > > Also there are any other solutions for thins kind of job ? > > Silviu > > > > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dangud@REDACTED Thu Dec 10 17:54:17 2015 From: dangud@REDACTED (Dan Gudmundsson) Date: Thu, 10 Dec 2015 16:54:17 +0000 Subject: [erlang-questions] observer app in production In-Reply-To: References: Message-ID: As long as you don't use the GUI on the production node, it should be OK. Performance depends on how often you sample the system; the process (top) window costs more if you have many processes, and so on. On Thu, Dec 10, 2015 at 5:19 PM Felix Gallo wrote: > You might also look at erlyberly (https://github.com/andytill/erlyberly), > which is a fairly new debugging/observation tool that's intended to > complement observer. > > F. 
> > On Thu, Dec 10, 2015 at 8:13 AM, Caragea Silviu > wrote: > >> Hello, >> >> It's ok to use observer in production environment to look at processes >> queues, memory usage and whatever else it's there? >> >> There any any performance issues for the node if you use this ? Currently >> we are using bigwig but observer seems more robust feature wise. >> >> Also there are any other solutions for thins kind of job ? >> >> Silviu >> >> >> >> _______________________________________________ >> erlang-questions mailing list >> erlang-questions@REDACTED >> http://erlang.org/mailman/listinfo/erlang-questions >> >> > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions > -------------- next part -------------- An HTML attachment was scrubbed... URL: From roberto@REDACTED Thu Dec 10 18:52:48 2015 From: roberto@REDACTED (Roberto Ostinelli) Date: Thu, 10 Dec 2015 18:52:48 +0100 Subject: [erlang-questions] Know if running in CT In-Reply-To: References: Message-ID: Thank you Jesper, I *very* rarely do it. However, I'm in need to slow down a node to allow for distributed tests to work. BTW this is exactly what I'm doing right now. Best, r. On Thu, Dec 10, 2015 at 5:18 PM, Jesper Louis Andersen < jesper.louis.andersen@REDACTED> wrote: > > On Thu, Dec 10, 2015 at 5:10 PM, Roberto Ostinelli > wrote: > >> Ideally, I would like to define a conditional macro depending on code >> running in tests or not. >> >> > You can probably have a configuration option via application:get_env/2 you > can use to make the discrimination. On the top of my head, this is what I > would do. > > >> Any ideas welcome! >> > > Advice: don't do it. Keep your test artifact equivalent to the deployment > artifact. Make the application such that normal configuration parameters > can be set in order to make the application work in the test environment. 
> It is safer in the long run because you don't want too many diverging code > paths based on the environment it lives in. It is simply too brittle, and > that would be my advice. > > > -- > J. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From roger@REDACTED Thu Dec 10 18:57:55 2015 From: roger@REDACTED (Roger Lipscombe) Date: Thu, 10 Dec 2015 17:57:55 +0000 Subject: [erlang-questions] Know if running in CT In-Reply-To: References: Message-ID: On 10 December 2015 at 17:52, Roberto Ostinelli wrote: > Thank you Jesper, > I *very* rarely do it. However, I'm in need to slow down a node to allow for > distributed tests to work. IntervalMs = application:get_env(?APPLICATION, furtle_interval_ms, 10). Because you never know when you'll need to change the furtling interval in production. But, if you're needing to slow things down to allow distributed tests to work, you've got a race condition, and sleeps (effectively) are a band-aid. You should be waiting for (whatever) to finish before moving on. We do this, mostly, by repeated polling and, occasionally, by waiting for events. Unless you're deliberately inducing race conditions and _that's_ why you need the slowness, of course... From roberto.ostinelli@REDACTED Thu Dec 10 19:16:48 2015 From: roberto.ostinelli@REDACTED (Roberto Ostinelli) Date: Thu, 10 Dec 2015 19:16:48 +0100 Subject: [erlang-questions] Know if running in CT In-Reply-To: References: Message-ID: <496984C9-8730-4586-A6D2-16FC4A6B264B@widetag.com> I need to ensure that a disconnected ct slave node waits until it is connected to the main ct node to send a message back to the process running the test. Otherwise this message gets lost and the test does not pass. No need to get too much in the details here, but believe me I've tried all possible options. 
Thank you for your inputs :) > On 10 dic 2015, at 18:57, Roger Lipscombe wrote: > >> On 10 December 2015 at 17:52, Roberto Ostinelli wrote: >> Thank you Jesper, >> I *very* rarely do it. However, I'm in need to slow down a node to allow for >> distributed tests to work. > > IntervalMs = application:get_env(?APPLICATION, furtle_interval_ms, 10). > > Because you never know when you'll need to change the furtling > interval in production. > > But, if you're needing to slow things down to allow distributed tests > to work, you've got a race condition, and sleeps (effectively) are a > band-aid. You should be waiting for (whatever) to finish before moving > on. We do this, mostly, by repeated polling and, occasionally, by > waiting for events. > > Unless you're deliberately inducing race conditions and _that's_ why > you need the slowness, of course... From silviu.cpp@REDACTED Thu Dec 10 20:01:23 2015 From: silviu.cpp@REDACTED (Caragea Silviu) Date: Thu, 10 Dec 2015 21:01:23 +0200 Subject: [erlang-questions] observer app in production In-Reply-To: References: Message-ID: Hello Dan, The GUI is running on other machines We are doing ssh and forward the necessary ports and run observer app from a hidden node remotely as described here: https://gist.github.com/pnc/9e957e17d4f9c6c81294 Silviu On Thu, Dec 10, 2015 at 6:54 PM, Dan Gudmundsson wrote: > As long as you don't use the gui on the production node, it shall be ok. > Performance depends of how often you sample the system, the process (top) > window costs more if you have many processes and so on. > > > On Thu, Dec 10, 2015 at 5:19 PM Felix Gallo wrote: > >> You might also look at erlyberly (https://github.com/andytill/erlyberly), >> which is a fairly new debugging/observation tool that's intended to >> complement observer. >> >> F. 
>> >> On Thu, Dec 10, 2015 at 8:13 AM, Caragea Silviu >> wrote: >> >>> Hello, >>> >>> It's ok to use observer in production environment to look at processes >>> queues, memory usage and whatever else it's there? >>> >>> There any any performance issues for the node if you use this ? >>> Currently we are using bigwig but observer seems more robust feature wise. >>> >>> Also there are any other solutions for thins kind of job ? >>> >>> Silviu >>> >>> >>> >>> _______________________________________________ >>> erlang-questions mailing list >>> erlang-questions@REDACTED >>> http://erlang.org/mailman/listinfo/erlang-questions >>> >>> >> _______________________________________________ >> erlang-questions mailing list >> erlang-questions@REDACTED >> http://erlang.org/mailman/listinfo/erlang-questions >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From felixgallo@REDACTED Thu Dec 10 22:51:35 2015 From: felixgallo@REDACTED (Felix Gallo) Date: Thu, 10 Dec 2015 13:51:35 -0800 Subject: [erlang-questions] dialyzer apparently-erroneously thinking a spec is underspecified Message-ID: consider: https://gist.github.com/anonymous/e5165c443f6bdf0f0780 by inspection, the spec on the boop function is "wrong", but this code passes dialyzer, unless you run it with -Wunderspecs or -Wspecdiffs, at which point it complains: tt.erl:10: The specification for tt:boop/1 states that the function might also return 'glory' but the inferred return is 'hello' | 'micronauts' So dialyzer does notice that 'micronauts' might be returned, and does know that the glory() type does not include that as a choice, but believes that the spec is just underspecified, rather than violated. Why is that? F. -------------- next part -------------- An HTML attachment was scrubbed... 
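The hidden-node setup Dan and Silviu describe can be sketched roughly as follows. Node names and the cookie are placeholders, and the exact menu path is from memory of the observer GUI, so treat this as an outline rather than a recipe:

```erlang
%% On your workstation, start a throwaway node. The -hidden flag makes
%% every connection this node establishes a hidden one, so it never
%% shows up in the production cluster's nodes() list:
%%
%%   erl -sname debug -hidden -setcookie PROD_COOKIE
%%
%% Then, in that node's shell:
net_kernel:connect_node('prod@prodhost'),  %% placeholder production node name
observer:start().
%% In the observer window, select the production node (Nodes menu).
%% Sampling then runs on the remote node while the GUI renders locally,
%% which is what keeps the rendering cost off the production node.
```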
URL: From aronisstav@REDACTED Thu Dec 10 23:23:37 2015 From: aronisstav@REDACTED (Stavros Aronis) Date: Thu, 10 Dec 2015 22:23:37 +0000 Subject: [erlang-questions] dialyzer apparently-erroneously thinking a spec is underspecified In-Reply-To: References: Message-ID: Hi Felix! The type system that Dialyzer is based on (success types) allows for the return value of a function to be over-approximated (i.e. include more values). An unfortunate side effect of that characteristic is that, in general, Dialyzer cannot be sure whether a particular value can really be returned from a function or not neither can it discern whether a particular value is an overapproximation or not. Assume e.g. that instead of random:uniform/0 you had a call to another function foo:bar(X) (using boop's argument) with the following code: -module(foo). -export([bar/1]). bar(X) when is_atom(X) -> 1; bar(_) -> 0. With no spec provided, the inferred type for foo:bar/1 one will be "any() -> 0 | 1". Notice however that for every call from your module (with an argument of type glory/0, or specifically 'hello') the result will be 1, thus safe. Unfortunately Dialyzer cannot be sure about this and the inferred type of boop/1 will be "hello -> hello | micronauts", an overapproximation of the accurate "hello -> hello". Now in either example, since there can be some overlap, Dialyzer makes the 'safe' assumption that any extra values in the inferred type are just due to overapproximation and stays silent. It only complains about the impossible *extra* value in the spec (glory) which can never be returned. 
Hope this helps, Stavros On Thu, Dec 10, 2015 at 10:52 PM Felix Gallo wrote: > consider: > > https://gist.github.com/anonymous/e5165c443f6bdf0f0780 > > by inspection, the spec on the boop function is "wrong", but this code > passes dialyzer, unless you run it with -Wunderspecs or -Wspecdiffs, at > which point it complains: > > tt.erl:10: The specification for tt:boop/1 states that the function might > also return 'glory' but the inferred return is 'hello' | 'micronauts' > > So dialyzer does notice that 'micronauts' might be returned, and does know > that the glory() type does not include that as a choice, but believes that > the spec is just underspecified, rather than violated. Why is that? > > F. > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions > -------------- next part -------------- An HTML attachment was scrubbed... URL: From raould@REDACTED Thu Dec 10 23:27:59 2015 From: raould@REDACTED (Raoul Duke) Date: Thu, 10 Dec 2015 14:27:59 -0800 Subject: [erlang-questions] pluggable type systems? (Re: dialyzer apparently-erroneously thinking a spec is underspecified) Message-ID: > The type system that Dialyzer is based on (success types) Has anybody ever mucked with other type systems for Erlang/Dialyzer? In some fantasy parallel universe that is full of rainbows and stuff, it would be cool to be able to plug them into some single tool, so one could try different / multiple static checks. From felixgallo@REDACTED Fri Dec 11 00:16:48 2015 From: felixgallo@REDACTED (Felix Gallo) Date: Thu, 10 Dec 2015 15:16:48 -0800 Subject: [erlang-questions] dialyzer apparently-erroneously thinking a spec is underspecified In-Reply-To: References: Message-ID: Stavros, thanks for the reply! So I do understanding that Dialyzer gets to boop/1 :: 'hello' | 'micronauts', and that it then cannot decide if it can really ever truly return both 'hello' and 'micronauts'. 
So far, so good. But then, I would expect the glory() spec to act as a second-pass constraint after the type for boop/1 has been inferred. My expectation is derived entirely from the ancient oral tradition folkways of my primitive people, rather than a formal type theoretical foundation, but I would expect the proposition 'hello' to be tested against {'glory' | 'hello'} and pass; and then for the proposition 'micronauts' to be tested against {'glory' | 'hello'} and fail, and for a diagnostic to be emitted. And indeed, if I change boop/1 to be boop(X) -> derp. Then dialyzer rightfully corrects me with a primly emitted diagnostic: tt.erl:9: Invalid type specification for function tt:boop/1. The success typing is ('hello') -> 'derp' and dually if I change boop/1 to be boop(X) -> hello. Then dialyzer rightfully approves and emits no diagnostic. It is only in the case where the function's return is inferred to be either-correctly-typed-or-not-correctly-typed where my expectation seems to be failing. So I gather my inference that -spec operates as a constraint on the function's type is subtly invalid? F. On Thu, Dec 10, 2015 at 2:23 PM, Stavros Aronis wrote: > Hi Felix! > > The type system that Dialyzer is based on (success types) allows for the > return value of a function to be over-approximated (i.e. include more > values). An unfortunate side effect of that characteristic is that, in > general, Dialyzer cannot be sure whether a particular value can really be > returned from a function or not neither can it discern whether a particular > value is an overapproximation or not. > > Assume e.g. that instead of random:uniform/0 you had a call to another > function foo:bar(X) (using boop's argument) with the following code: > > -module(foo). > -export([bar/1]). > bar(X) when is_atom(X) -> 1; > bar(_) -> 0. > > With no spec provided, the inferred type for foo:bar/1 one will be "any() > -> 0 | 1". 
Notice however that for every call from your module (with an > argument of type glory/0, or specifically 'hello') the result will be 1, > thus safe. Unfortunately Dialyzer cannot be sure about this and the > inferred type of boop/1 will be "hello -> hello | micronauts", an > overapproximation of the accurate "hello -> hello". > > Now in either example, since there can be some overlap, Dialyzer makes the > 'safe' assumption that any extra values in the inferred type are just due > to overapproximation and stays silent. It only complains about the > impossible *extra* value in the spec (glory) which can never be returned. > > Hope this helps, > > Stavros > > > On Thu, Dec 10, 2015 at 10:52 PM Felix Gallo wrote: > >> consider: >> >> https://gist.github.com/anonymous/e5165c443f6bdf0f0780 >> >> by inspection, the spec on the boop function is "wrong", but this code >> passes dialyzer, unless you run it with -Wunderspecs or -Wspecdiffs, at >> which point it complains: >> >> tt.erl:10: The specification for tt:boop/1 states that the function might >> also return 'glory' but the inferred return is 'hello' | 'micronauts' >> >> So dialyzer does notice that 'micronauts' might be returned, and does >> know that the glory() type does not include that as a choice, but believes >> that the spec is just underspecified, rather than violated. Why is that? >> >> F. >> _______________________________________________ >> erlang-questions mailing list >> erlang-questions@REDACTED >> http://erlang.org/mailman/listinfo/erlang-questions >> > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From aronisstav@REDACTED Fri Dec 11 00:46:01 2015 From: aronisstav@REDACTED (Stavros Aronis) Date: Thu, 10 Dec 2015 23:46:01 +0000 Subject: [erlang-questions] dialyzer apparently-erroneously thinking a spec is underspecified In-Reply-To: References: Message-ID: Felix, you are right at inferring that specs are additional constraints, but *only* in a function's calls: the return value of any call to boop will be assumed to be something that is both in the inferred *and* the specified type (here 'hello'); 'micronauts' will never be an expected return value (try to pattern match it against 'micronauts' and see what happens). Just to clarify the examples you mention: If boop is inferred to be returning 'derp', then the spec is certainly wrong, so Dialyzer complains. If boop is inferred to be returning 'hello', then the spec is certainly right, but underspecified, so Dialyzer stays silent unless -Wunderspecs is used. If boop is inferred to be returning 'hello' OR 'micronauts', then there is the possibility that the spec is right (still underspecified; glory is impossible) but Dialyzer overapproximated something, resulting in the inclusion of an additional value which is impossible (see example in the previous message). Dialyzer's rule #1 is 'only warn if something is definitely wrong' (also known as 'Dialyzer is never wrong'), so in that last case no warnings are emitted. It is unfortunate that Dialyzer doesn't believe more in its own deductive powers (in the original example 'micronauts' is certainly possible and not an overapproximation) but then again rule #1's corollary is 'Dialyzer never promised to find all discrepancies'...! Stavros On Fri, Dec 11, 2015 at 12:17 AM Felix Gallo wrote: > Stavros, thanks for the reply! > > So I do understanding that Dialyzer gets to boop/1 :: 'hello' | > 'micronauts', and that it then cannot decide if it can really ever truly > return both 'hello' and 'micronauts'. So far, so good. 
> > But then, I would expect the glory() spec to act as a second-pass > constraint after the type for boop/1 has been inferred. My expectation is > derived entirely from the ancient oral tradition folkways of my primitive > people, rather than a formal type theoretical foundation, but I would > expect the proposition 'hello' to be tested against {'glory' | 'hello'} and > pass; and then for the proposition 'micronauts' to be tested against > {'glory' | 'hello'} and fail, and for a diagnostic to be emitted. > > And indeed, if I change boop/1 to be > > boop(X) -> > derp. > > Then dialyzer rightfully corrects me with a primly emitted diagnostic: > > tt.erl:9: Invalid type specification for function tt:boop/1. The success > typing is ('hello') -> 'derp' > > and dually if I change boop/1 to be > > boop(X) -> > hello. > > Then dialyzer rightfully approves and emits no diagnostic. > > It is only in the case where the function's return is inferred to be > either-correctly-typed-or-not-correctly-typed where my expectation seems to > be failing. > > So I gather my inference that -spec operates as a constraint on the > function's type is subtly invalid? > > F. > > > On Thu, Dec 10, 2015 at 2:23 PM, Stavros Aronis > wrote: > >> Hi Felix! >> >> The type system that Dialyzer is based on (success types) allows for the >> return value of a function to be over-approximated (i.e. include more >> values). An unfortunate side effect of that characteristic is that, in >> general, Dialyzer cannot be sure whether a particular value can really be >> returned from a function or not neither can it discern whether a particular >> value is an overapproximation or not. >> >> Assume e.g. that instead of random:uniform/0 you had a call to another >> function foo:bar(X) (using boop's argument) with the following code: >> >> -module(foo). >> -export([bar/1]). >> bar(X) when is_atom(X) -> 1; >> bar(_) -> 0. >> >> With no spec provided, the inferred type for foo:bar/1 one will be "any() >> -> 0 | 1". 
Notice however that for every call from your module (with an >> argument of type glory/0, or specifically 'hello') the result will be 1, >> thus safe. Unfortunately Dialyzer cannot be sure about this and the >> inferred type of boop/1 will be "hello -> hello | micronauts", an >> overapproximation of the accurate "hello -> hello". >> >> Now in either example, since there can be some overlap, Dialyzer makes >> the 'safe' assumption that any extra values in the inferred type are just >> due to overapproximation and stays silent. It only complains about the >> impossible *extra* value in the spec (glory) which can never be returned. >> >> Hope this helps, >> >> Stavros >> >> >> On Thu, Dec 10, 2015 at 10:52 PM Felix Gallo >> wrote: >> >>> consider: >>> >>> https://gist.github.com/anonymous/e5165c443f6bdf0f0780 >>> >>> by inspection, the spec on the boop function is "wrong", but this code >>> passes dialyzer, unless you run it with -Wunderspecs or -Wspecdiffs, at >>> which point it complains: >>> >>> tt.erl:10: The specification for tt:boop/1 states that the function >>> might also return 'glory' but the inferred return is 'hello' | 'micronauts' >>> >>> So dialyzer does notice that 'micronauts' might be returned, and does >>> know that the glory() type does not include that as a choice, but believes >>> that the spec is just underspecified, rather than violated. Why is that? >>> >>> F. >>> _______________________________________________ >>> erlang-questions mailing list >>> erlang-questions@REDACTED >>> http://erlang.org/mailman/listinfo/erlang-questions >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From felixgallo@REDACTED Fri Dec 11 01:10:21 2015 From: felixgallo@REDACTED (Felix Gallo) Date: Thu, 10 Dec 2015 16:10:21 -0800 Subject: [erlang-questions] dialyzer apparently-erroneously thinking a spec is underspecified In-Reply-To: References: Message-ID: Stavros -- thanks, that gave me the proper intuition. 
I had been wanting to defend a library against erroneous calls at library authoring time, as one is trained to do with other type systems, but I now see that the library user is going to have to dialyze the whole system in order to reap those benefits. Probably a small price to pay. Appreciate the elucidation as always. F. On Thu, Dec 10, 2015 at 3:46 PM, Stavros Aronis wrote: > Felix, you are right at inferring that specs are additional constraints, > but *only* in a function's calls: the return value of any call to boop will > be assumed to be something that is both in the inferred *and* the specified > type (here 'hello'); 'micronauts' will never be an expected return value > (try to pattern match it against 'micronauts' and see what happens). > > Just to clarify the examples you mention: > > If boop is inferred to be returning 'derp', then the spec is certainly > wrong, so Dialyzer complains. > > If boop is inferred to be returning 'hello', then the spec is certainly > right, but underspecified, so Dialyzer stays silent unless -Wunderspecs is > used. > > If boop is inferred to be returning 'hello' OR 'micronauts', then there is > the possibility that the spec is right (still underspecified; glory is > impossible) but Dialyzer overapproximated something, resulting in the > inclusion of an additional value which is impossible (see example in the > previous message). > > Dialyzer's rule #1 is 'only warn if something is definitely wrong' (also > known as 'Dialyzer is never wrong'), so in that last case no warnings are > emitted. It is unfortunate that Dialyzer doesn't believe more in its own > deductive powers (in the original example 'micronauts' is certainly > possible and not an overapproximation) but then again rule #1's corollary is > 'Dialyzer never promised to find all discrepancies'...! > > Stavros > > On Fri, Dec 11, 2015 at 12:17 AM Felix Gallo wrote: > >> Stavros, thanks for the reply! 
>> >> So I do understanding that Dialyzer gets to boop/1 :: 'hello' | >> 'micronauts', and that it then cannot decide if it can really ever truly >> return both 'hello' and 'micronauts'. So far, so good. >> >> But then, I would expect the glory() spec to act as a second-pass >> constraint after the type for boop/1 has been inferred. My expectation is >> derived entirely from the ancient oral tradition folkways of my primitive >> people, rather than a formal type theoretical foundation, but I would >> expect the proposition 'hello' to be tested against {'glory' | 'hello'} and >> pass; and then for the proposition 'micronauts' to be tested against >> {'glory' | 'hello'} and fail, and for a diagnostic to be emitted. >> >> And indeed, if I change boop/1 to be >> >> boop(X) -> >> derp. >> >> Then dialyzer rightfully corrects me with a primly emitted diagnostic: >> >> tt.erl:9: Invalid type specification for function tt:boop/1. The success >> typing is ('hello') -> 'derp' >> >> and dually if I change boop/1 to be >> >> boop(X) -> >> hello. >> >> Then dialyzer rightfully approves and emits no diagnostic. >> >> It is only in the case where the function's return is inferred to be >> either-correctly-typed-or-not-correctly-typed where my expectation seems to >> be failing. >> >> So I gather my inference that -spec operates as a constraint on the >> function's type is subtly invalid? >> >> F. >> >> >> On Thu, Dec 10, 2015 at 2:23 PM, Stavros Aronis >> wrote: >> >>> Hi Felix! >>> >>> The type system that Dialyzer is based on (success types) allows for the >>> return value of a function to be over-approximated (i.e. include more >>> values). An unfortunate side effect of that characteristic is that, in >>> general, Dialyzer cannot be sure whether a particular value can really be >>> returned from a function or not neither can it discern whether a particular >>> value is an overapproximation or not. >>> >>> Assume e.g. 
that instead of random:uniform/0 you had a call to another >>> function foo:bar(X) (using boop's argument) with the following code: >>> >>> -module(foo). >>> -export([bar/1]). >>> bar(X) when is_atom(X) -> 1; >>> bar(_) -> 0. >>> >>> With no spec provided, the inferred type for foo:bar/1 one will be >>> "any() -> 0 | 1". Notice however that for every call from your module (with >>> an argument of type glory/0, or specifically 'hello') the result will be 1, >>> thus safe. Unfortunately Dialyzer cannot be sure about this and the >>> inferred type of boop/1 will be "hello -> hello | micronauts", an >>> overapproximation of the accurate "hello -> hello". >>> >>> Now in either example, since there can be some overlap, Dialyzer makes >>> the 'safe' assumption that any extra values in the inferred type are just >>> due to overapproximation and stays silent. It only complains about the >>> impossible *extra* value in the spec (glory) which can never be returned. >>> >>> Hope this helps, >>> >>> Stavros >>> >>> >>> On Thu, Dec 10, 2015 at 10:52 PM Felix Gallo >>> wrote: >>> >>>> consider: >>>> >>>> https://gist.github.com/anonymous/e5165c443f6bdf0f0780 >>>> >>>> by inspection, the spec on the boop function is "wrong", but this code >>>> passes dialyzer, unless you run it with -Wunderspecs or -Wspecdiffs, at >>>> which point it complains: >>>> >>>> tt.erl:10: The specification for tt:boop/1 states that the function >>>> might also return 'glory' but the inferred return is 'hello' | 'micronauts' >>>> >>>> So dialyzer does notice that 'micronauts' might be returned, and does >>>> know that the glory() type does not include that as a choice, but believes >>>> that the spec is just underspecified, rather than violated. Why is that? >>>> >>>> F. 
>>>> _______________________________________________
>>>> erlang-questions mailing list
>>>> erlang-questions@REDACTED
>>>> http://erlang.org/mailman/listinfo/erlang-questions

From vances@REDACTED Fri Dec 11 04:10:27 2015
From: vances@REDACTED (Vance Shipley)
Date: Fri, 11 Dec 2015 08:40:27 +0530
Subject: [erlang-questions] observer app in production
In-Reply-To: References: Message-ID:

On Thu, Dec 10, 2015 at 9:43 PM, Caragea Silviu wrote:
> Is it ok to use observer in a production environment to look at process queues, memory usage and whatever else is there?

That would depend on how critical the system is and how loaded it is at the time.

> Are there any performance issues for the node if you use this?

There is additional load added, which is larger the more processes are running, spawning & exiting. The "observer effect" can become a problem when the system is highly loaded.

I would recommend frequent/constant monitoring of production systems to learn how they behave in the wild, as long as they aren't critical and highly loaded. It is even more important to observe highly loaded systems, as that is when their true behaviour reveals itself; however, that is also when they are least stable, so if they are also critical you should be more concerned about keeping them up.

--
-Vance

From kenji@REDACTED Fri Dec 11 04:19:23 2015
From: kenji@REDACTED (Kenji Rikitake)
Date: Fri, 11 Dec 2015 12:19:23 +0900
Subject: [erlang-questions] Idea for deprecating EPMD
In-Reply-To: References: Message-ID: <20151211031923.GA69433@k2r.org>

> My questions for the list are:
> * Are you annoyed by epmd too?

Yes. The age of sunrpc should have ended 10 years ago (though I know it's really a hard task, as Craig writes). And it's not firewall or filtering friendly at all.

> * Do you think this idea is worth me writing up into an EEP or writing a pull request?

Yes, highly appreciated.
> * Do you think this idea is unworkable for some reason I'm overlooking?

Sergey Aleynikov made a very good point on this, especially on supporting multiple transport protocols. At least tcp4 and tcp6 should be simultaneously usable.

Regards,
Kenji Rikitake

++> Geoff Cant [2015-12-08 15:57:06 -0800]:
> Date: Tue, 8 Dec 2015 15:57:06 -0800
> From: Geoff Cant
> To: Erlang Questions
> Subject: [erlang-questions] Idea for deprecating EPMD
>
> Hi all, I find EPMD to be a regular frustration when deploying and operating Erlang systems. EPMD is a separate service that needs to be running for Erlang distribution to work properly, and usually (in systems that don't use distribution for their main function) it's not set up right, and you only notice in production, because the only time you use distribution is to get a remote shell (over localhost). (Maybe I'm just bad at doing this, but I do it a lot)
>
> Erlang node names already encode host information: 'descriptive_name@REDACTED'. If we include the erlang distribution listen port too, that would remove the need for EPMD. For example: 'descriptive_name@REDACTED:distribution_port'. Node names using this scheme would skip the EPMD step; otherwise erlang distribution would fall back to the current system.
>
> Thanks,
> -Geoff
> _______________________________________________
> erlang-questions mailing list
> erlang-questions@REDACTED
> http://erlang.org/mailman/listinfo/erlang-questions

From jim.rosenblum@REDACTED Fri Dec 11 04:30:08 2015
From: jim.rosenblum@REDACTED (jim rosenblum)
Date: Thu, 10 Dec 2015 22:30:08 -0500
Subject: [erlang-questions] Understanding global:set_lock/1,2,3
Message-ID:

Roberto,

I use global for a very similar use case. I have a distributed, in-memory cache that uses Mnesia. The first node of the cluster creates the schema; subsequent nodes add themselves to the already created cluster. Thus I need to make sure that the test for being the first node and the cluster creation is protected.
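The shape of that serialization pattern, in outline (a minimal sketch, not the real jc code; the resource id and the fun body here are placeholders):

```erlang
%% Only one process cluster-wide runs the fun at a time; any other
%% caller locking the same id blocks until the lock is released.
global:trans({my_resource, self()},   % {ResourceId, LockRequesterId}
             fun() ->
                     %% critical section, e.g. check for / create the schema
                     ok
             end,
             [node() | nodes()],      % nodes the lock ranges over
             infinity).               % retry taking the lock indefinitely
```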
I use global:trans/4. The first parameter is an id = {resource, PID of lock requestor}. The second is a function which is executed should the lock be acquired. The third parameter indicates which nodes the lock should range over. The fourth indicates how many times acquiring the lock should be retried (infinity means retry forever).

My code pings all configured node names to try to join the mesh of participating nodes, then it synchronizes with the global name server across all nodes in the cluster, then locks the resource 'jc_mnesia' and, assuming the lock is acquired, executes the function. Any other node that tries to execute this same code will block until the first node is out of the function.

Feel free to look at my code (https://github.com/jr0senblum/jc) or email me with any questions. I *think* this works... it's been in production for a while... so... there's that....

... snip...

    [Node || Node <- Nodes, pong == net_adm:ping(Node)],
    global:sync(),
    global:trans({jc_mnesia, self()},
                 fun() ->
                         mnesia:start(),
                         case [Node || Node <- nodes(), jc_loaded(Node)] of
                             [] ->
                                 % No nodes have the jc service, this one is first
                                 mnesia:create_schema([]),
                                 dynamic_db_init([]),
                                 Indexes = application:get_env(jc, indexes, []),
                                 [jc_store:start_indexing(Map, Path) ||
                                     {Map, Path} <- Indexes];
                             Ns ->
                                 % Not the first node up, join existing cluster
                                 dynamic_db_init(Ns)
                         end,
                         true = global:del_lock({jc_mnesia, self()}),
                         ok
                 end,
                 [node() | nodes()],
                 infinity).

Dear list,
I'm trying to get an understanding of what global:set_lock/1,2,3 exactly does.

I read from the docs:

Sets a lock on the specified nodes (or on all nodes if none are specified) on ResourceId for LockRequesterId.

Let's say that I want to perform a series of operations on mnesia schemas and want to avoid all other nodes accessing mnesia tables while one node is busy at it.

Is it enough to write:

    global:trans({{?MODULE, lock_mnesia_for_a_while}, self()},
                 fun() ->
                         do_things_on_mnesia()
                 end).

I don't get how this could lock mnesia for the other nodes.
Can some kind soul point me in the right direction?

Thank you,
r.

Robert

From gleber.p@REDACTED Fri Dec 11 11:00:34 2015
From: gleber.p@REDACTED (Gleb Peregud)
Date: Fri, 11 Dec 2015 11:00:34 +0100
Subject: [erlang-questions] Erlang package manager
In-Reply-To: References: <54980224.8070106@ninenines.eu> <1419280721.3759353.205867005.30015C38@webmail.messagingengine.com> <1420228976.383269.208922205.38DDC0CC@webmail.messagingengine.com>
Message-ID:

I am currently working on a proof of concept for Nix packages in Erlang. Here are my assumptions:

- maintenance of Nix packages should be automated
- tarballs with packages should come from a single source
- be willing to sacrifice backward compatibility to be able to do automatic maintenance
- Nix packages should be usable both for deployment and development
- provide reproducible (but not necessarily hermetic) builds of Erlang packages and all their Erlang and non-Erlang dependencies

This leads to the following approach:

- use Hex.pm as the source of packages' tarballs and metadata
- exclusively use Rebar3 as a build tool, but not as a package management tool
- import the latest versions of each package on Hex.pm and their deps' transitive closures
- the import needs to be mostly automated and should be able to run in "a cron" (e.g. on Travis)

Here's my first stab at it, focusing purely on the Nix part, without working on the scripts/tooling yet:

https://github.com/NixOS/nixpkgs/pull/11614

If there are Nix-ers here, please take a look and comment. If you are curious about details, feel free to ask questions.

Best regards,
Gleb Peregud

On Sun, Jan 4, 2015 at 12:01 PM, Tuncer Ayaz wrote:
> On Sat, Jan 3, 2015 at 6:16 PM, Tuncer Ayaz wrote:
> > On Sat, Jan 3, 2015 at 3:54 PM, Bjorn-Egil Dahlberg wrote:
> > > Hi everyone!
> > [...]
> > > > 3. Protocols first. I saw someone mentioning this already and it
Protocols first is perhaps a misnomer but it is > > > important anyway. We have several languages that runs ontop of > > > beam or Erlang. Elixir has its own package manger Hex (client) and > > > Hexweb (server) and I think it is a good thing if we use > > > compatible protocols. Http and restful protocols is all the rage > > > and I don't any reason to sidestep that. My vision here is a > > > common protocol bus and the client can query any server for Erlang > > > applications, i.e. Hex can query an Erlang Application provider > > > for updates and install them. I think this only affect us (Erlang) > > > by pressuring us to formalize our rest protocols / api .. but that > > > is a good thing. > > > > As long as we're not held back by limitations of existing protocols, > > and are free to improve/extend, sure, it's a good approach. > > Thinking about, it appears what you're actually suggesting is a smart > http protocol which would disallow easy mirroring. If that's the case, > then it's a bad idea. > > I've explained this in my initial response to Bruce, but I'll try to > summarize the benefits of a simple files+dirs structure served via > http/ftp/_whatever_: > > * Like mirrors for CTAN, anyone who wishes to host a mirror can do so > easily alongside CTAN, *nix distros, etc., without the need for a > special CGI script or extra http daemon. This greatly increases the > reliability and availability by avoiding the need for a centrally > managed and to-be-accessed smart host that has to be paid for and > administered. No matter how professional and CDN'ed something is, it > will eventually go down for a period. Using a conventionally > mirror'able structure avoids "is foo.erlang.org down?" questions and > broken builds, as users will by convention use mirrors instead of a > central location. For this to be a non-issue, we'd have to have van > Jacobson's content-centric networking (or one of the derivatives) in > addition to more reliable global network links. 
> * It's easy to set up public or private mirrors.
>
> * In case of problems, it gives you the option of restoring or checking the integrity of files by making use of several independent, trusted mirrors that hadn't been updated to the problematic state yet.
>
> That said, it's perfectly fine to have one or more registries where stuff is published to and mirrored from.
> _______________________________________________
> erlang-questions mailing list
> erlang-questions@REDACTED
> http://erlang.org/mailman/listinfo/erlang-questions

From roberto@REDACTED Fri Dec 11 11:32:53 2015
From: roberto@REDACTED (Roberto Ostinelli)
Date: Fri, 11 Dec 2015 11:32:53 +0100
Subject: [erlang-questions] Understanding global:set_lock/1,2,3
In-Reply-To: References: Message-ID:

Hello Jim,
Thank you for this, I'm familiar with the syntax. It is now clear to me that these global locks are merely IDs that one can use to globally block other processes from accessing the same ID, hence preventing the same code from running simultaneously on more than one node.

Thank you,
r.

From roberto@REDACTED Fri Dec 11 11:33:54 2015
From: roberto@REDACTED (Roberto Ostinelli)
Date: Fri, 11 Dec 2015 11:33:54 +0100
Subject: [erlang-questions] Understanding global:set_lock/1,2,3
In-Reply-To: <12F2115FD1CCEE4294943B2608A18FA301A273E7C3@MAIL01.win.lbaum.eu> References: <12F2115FD1CCEE4294943B2608A18FA301A273E7C3@MAIL01.win.lbaum.eu> Message-ID:

Hi Dmitry and Tobias,
Yes - it looks like these are simple mutexes. Thank you for your inputs!
r.

On Thu, Dec 10, 2015 at 1:55 PM, Tobias Schlager <Tobias.Schlager@REDACTED> wrote:
> Hi Roberto,
>
> AFAIK for the user, global locks work similarly to other mutex/lock implementations; in your case {?MODULE, lock_mnesia_for_a_while} is your 'mutex'.
> Locking this mutex does not automagically lock mnesia or guard shared data structures. However, a second process trying to acquire/lock the mutex will be blocked until the current owner releases the lock.
>
> Please correct me if I'm wrong.
>
> Regards
> Tobias
>
> ------------------------------
> *From:* erlang-questions-bounces@REDACTED [erlang-questions-bounces@REDACTED] on behalf of Roberto Ostinelli [roberto@REDACTED]
> *Sent:* Thursday, 10 December 2015 12:16
> *To:* Erlang
> *Subject:* [erlang-questions] Understanding global:set_lock/1,2,3
>
> Dear list,
> I'm trying to get an understanding of what global:set_lock/1,2,3 exactly does.
>
> I read from the docs:
>
> Sets a lock on the specified nodes (or on all nodes if none are specified) on ResourceId for LockRequesterId.
>
> Let's say that I want to perform a series of operations on mnesia schemas and want to avoid all other nodes accessing mnesia tables while one node is busy at it.
>
> Is it enough to write:
>
> global:trans({{?MODULE, lock_mnesia_for_a_while}, self()},
>              fun() ->
>                  do_things_on_mnesia()
>              end).
>
> I don't get how this could lock mnesia for the other nodes.
>
> Can some kind soul point me in the right direction?
>
> Thank you,
> r.

From kosio1986@REDACTED Fri Dec 11 09:14:33 2015
From: kosio1986@REDACTED (Konstantin Klisurski)
Date: Fri, 11 Dec 2015 10:14:33 +0200
Subject: [erlang-questions] Silent installation on windows
Message-ID:

Hi all,

I am writing a puppet module for downloading and installing erlang on Windows. I am using otp_win64_R15B01.exe, which uses the Nullsoft installer, so I am passing /S for silent, and this indeed works for the Nullsoft installer, but it starts installing Erlang, then pauses, triggers a Visual C++ Redistributable installation prompt, which is included in the erlang .exe, and waits for input. So I am stuck with the unattended installation, because I cannot find a way to bypass the VC++R GUI prompt... I tried to install VC++R before installing the Erlang package, but it AGAIN triggers the VC++R exe and asks me if I want to Repair or Uninstall or Cancel. I wasn't able to find a way to pass a parameter from the Nullsoft installer to the child one (InstallShield for instance can do that). So is there anyone that managed to deal with that, or found another way to install erlang on Windows unattended (I need the R15B01 version and prefer downloading it directly from the site, but if there is another package that is capable of doing that I might re-assess the version I am using).

Best regards,
Konstantin

From kvratkin@REDACTED Fri Dec 11 11:53:46 2015
From: kvratkin@REDACTED (Kirill Ratkin)
Date: Fri, 11 Dec 2015 13:53:46 +0300
Subject: [erlang-questions] Record fields
Message-ID:

Hi!

I'm trying to get record fields at runtime. If I have a record 'test' I can do:

record_info(fields, test).

But if I do:

R = test,
record_info(fields, R).

I get the error:

* 1: illegal record info.

Documentation says: To each module using records, a pseudo function is added during compilation to obtain information about records.

It seems it's really a 'pseudo' function ....
Is there an alternative way to get record fields at runtime?

From technion@REDACTED Fri Dec 11 12:12:54 2015
From: technion@REDACTED (Technion)
Date: Fri, 11 Dec 2015 11:12:54 +0000
Subject: [erlang-questions] Certificate alerting tool
Message-ID:

Hi,

I've received a huge amount of help from this list recently and I want to thank everyone involved. For anyone interested in the (current) final result, I've released this tool, which is an open source, proactive alerting service for Google's certificate transparency.

https://github.com/technion/ct_advisor

ct_advisor - A monitoring service for Certificate Transparency

Any opinions on the codebase would be appreciated.

From sergej.jurecko@REDACTED Fri Dec 11 12:19:30 2015
From: sergej.jurecko@REDACTED (Sergej Jurečko)
Date: Fri, 11 Dec 2015 12:19:30 +0100
Subject: [erlang-questions] Record fields
In-Reply-To: References: Message-ID:

No.

Sergej

On Fri, Dec 11, 2015 at 11:53 AM, Kirill Ratkin wrote:
> Hi!
>
> I'm trying to get record fields at runtime.
> If I have a record 'test' I can do:
> record_info(fields, test).
>
> But if I do:
> R = test,
> record_info(fields, R).
>
> I get the error:
> * 1: illegal record info.
>
> Documentation says: To each module using records, a pseudo function is added during compilation to obtain information about records.
>
> It seems it's really a 'pseudo' function ....
>
> Is there an alternative way to get record fields at runtime?
>
> _______________________________________________
> erlang-questions mailing list
> erlang-questions@REDACTED
> http://erlang.org/mailman/listinfo/erlang-questions

-------------- next part --------------
An HTML attachment was scrubbed...
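Since record_info/2 is expanded away at compile time, one common workaround is to enumerate, at compile time, each record you care about in one dispatch function; the record name can then be an ordinary runtime atom (a sketch; the record definitions below are made up for illustration):

```erlang
-module(recinfo).
-export([fields/1]).

-record(test, {a, b, c}).
-record(user, {name, age}).

%% Each record_info/2 call is expanded at compile time, but the clause
%% head is matched against an atom supplied at runtime.
fields(test) -> record_info(fields, test);
fields(user) -> record_info(fields, user).
```

With this in place, recinfo:fields(test) returns [a,b,c] at runtime.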
From zxq9@REDACTED Fri Dec 11 12:24:05 2015
From: zxq9@REDACTED (zxq9)
Date: Fri, 11 Dec 2015 20:24:05 +0900
Subject: [erlang-questions] Record fields
In-Reply-To: References: Message-ID: <4006665.vIFaWqqs3D@changa>

On Fri, 11 Dec 2015 13:53:46 Kirill Ratkin wrote:
> Hi!
>
> I'm trying to get record fields at runtime.
> If I have a record 'test' I can do:
> record_info(fields, test).
>
> But if I do:
> R = test,
> record_info(fields, R).
>
> I get the error:
> * 1: illegal record info.
>
> Documentation says: To each module using records, a pseudo function is added during compilation to obtain information about records.
>
> It seems it's really a 'pseudo' function ....
>
> Is there an alternative way to get record fields at runtime?

Not exactly, no. Records are converted to tuples, not hashes, and the way field labels are expanded is a transformation performed at compile time, when the code is converted and record syntax is all magicked away.

Consider this:

=====================================================================
ceverett@REDACTED:~/Code/erlang$ cat recfind.erl
-module(recfind).
-export([find_by_phone/2, find_by_mail/2, fields/0]).

-record(contact, {fname, lname, phone=[], mail=[], city=[], street=[]}).

find_by_phone(Phone, AddressBook) ->
    find(Phone, #contact.phone, AddressBook).

find_by_mail(Mail, AddressBook) ->
    find(Mail, #contact.mail, AddressBook).

find(Value, Field, AddressBook) ->
    case lists:keyfind(Value, Field, AddressBook) of
        #contact{fname = Fname, lname = Lname} -> {Fname, Lname};
        false -> {error, not_found}
    end.

fields() ->
    Fields = record_info(fields, contact),
    ok = io:format("Fields: ~tp~n", [Fields]).

ceverett@REDACTED:~/Code/erlang$ erlc -E recfind.erl
ceverett@REDACTED:~/Code/erlang$ cat recfind.E
-file("recfind.erl", 1).

find_by_phone(Phone, AddressBook) ->
    find(Phone, 4, AddressBook).

find_by_mail(Mail, AddressBook) ->
    find(Mail, 5, AddressBook).

find(Value, Field, AddressBook) ->
    case lists:keyfind(Value, Field, AddressBook) of
        {contact,Fname,Lname,_,_,_,_} -> {Fname,Lname};
        false -> {error,not_found}
    end.

fields() ->
    Fields = [fname,lname,phone,mail,city,street],
    ok = io:format("Fields: ~tp~n", [Fields]).

module_info() -> erlang:get_module_info(recfind).

module_info(X) -> erlang:get_module_info(recfind, X).
=====================================================================

You see what happened in the function fields/0?

    Fields = record_info(fields, contact),

became

    Fields = [fname,lname,phone,mail,city,street],

and

    find(Phone, #contact.phone, AddressBook)

became

    find(Phone, 4, AddressBook).

and

    #contact{fname = Fname, lname = Lname}

became

    {contact,Fname,Lname,_,_,_,_}

So, the basic answer to your question is "no". But the more interesting answer is to ask you another question: What is the effect you are trying to achieve by using record_info/2 at runtime? Maybe a different data structure is better, maybe the #record.accessor syntax is sufficient, and maybe what you are trying to do can be done better some other way?

-Craig

From sergej.jurecko@REDACTED Fri Dec 11 13:35:41 2015
From: sergej.jurecko@REDACTED (Sergej Jurečko)
Date: Fri, 11 Dec 2015 13:35:41 +0100
Subject: [erlang-questions] Silent installation on windows
In-Reply-To: References: Message-ID:

What we do with erlang on windows is simply take the erts folder from program files, remove the erl.ini files that are next to erl.exe, and ship apps with Erlang as part of the package. That way you can manually install the VC runtime.

Sergej

On Fri, Dec 11, 2015 at 9:14 AM, Konstantin Klisurski wrote:
> Hi all,
>
> I am writing a puppet module for downloading and installing erlang on
> Windows.
> I am using otp_win64_R15B01.exe, which uses the Nullsoft installer, so I am passing /S for silent, and this indeed works for the Nullsoft installer, but it starts installing Erlang, then pauses, triggers a Visual C++ Redistributable installation prompt, which is included in the erlang .exe, and waits for input. So I am stuck with the unattended installation, because I cannot find a way to bypass the VC++R GUI prompt... I tried to install VC++R before installing the Erlang package, but it AGAIN triggers the VC++R exe and asks me if I want to Repair or Uninstall or Cancel. I wasn't able to find a way to pass a parameter from the Nullsoft installer to the child one (InstallShield for instance can do that). So is there anyone that managed to deal with that, or found another way to install erlang on Windows unattended (I need the R15B01 version and prefer downloading it directly from the site, but if there is another package that is capable of doing that I might re-assess the version I am using).
>
> Best regards,
> Konstantin
>
> _______________________________________________
> erlang-questions mailing list
> erlang-questions@REDACTED
> http://erlang.org/mailman/listinfo/erlang-questions

From donpedrothird@REDACTED Fri Dec 11 15:28:04 2015
From: donpedrothird@REDACTED (John Doe)
Date: Fri, 11 Dec 2015 17:28:04 +0300
Subject: [erlang-questions] Record fields
In-Reply-To: References: Message-ID:

You can use this parse transform: https://github.com/uwiger/parse_trans/blob/master/src/exprecs.erl. It generates access functions for records.

2015-12-11 13:53 GMT+03:00 Kirill Ratkin:
> Hi!
>
> I'm trying to get record fields at runtime.
> If I have a record 'test' I can do:
> record_info(fields, test).
>
> But if I do:
> R = test,
> record_info(fields, R).
>
> I get the error:
> * 1: illegal record info.
>
> Documentation says: To each module using records, a pseudo function is added during compilation to obtain information about records.
>
> It seems it's really a 'pseudo' function ....
>
> Is there an alternative way to get record fields at runtime?
>
> _______________________________________________
> erlang-questions mailing list
> erlang-questions@REDACTED
> http://erlang.org/mailman/listinfo/erlang-questions

From montuori@REDACTED Fri Dec 11 19:35:06 2015
From: montuori@REDACTED (Kevin Montuori)
Date: Fri, 11 Dec 2015 12:35:06 -0600
Subject: [erlang-questions] maps:keys/1, values/1 stability?
Message-ID:

Hi All --

The maps documentation indicates that keys/1 and values/1 will return results in arbitrary order, which makes sense. I'm wondering if they'll return in the *same* arbitrary order. In other words, is this assertion always correct?

    M = maps:from_list(lists:zip(maps:keys(M), maps:values(M))).

Thanks for any insight.

k.

--
Kevin Montuori
montuori@REDACTED

From siraaj@REDACTED Fri Dec 11 21:15:41 2015
From: siraaj@REDACTED (Siraaj Khandkar)
Date: Fri, 11 Dec 2015 15:15:41 -0500
Subject: [erlang-questions] lager changelog?
Message-ID: <566B2EED.6010408@khandkar.net>

Is anyone aware of anything like a changelog for lager? I did not see anything obvious at https://github.com/basho/lager

More specifically, I inherited a project which uses 2.0.0rc1 and am wondering what surprises and API changes await me if I wanted to upgrade.

From technion@REDACTED Fri Dec 11 23:09:58 2015
From: technion@REDACTED (Technion)
Date: Fri, 11 Dec 2015 22:09:58 +0000
Subject: [erlang-questions] lager changelog?
In-Reply-To: <566B2EED.6010408@khandkar.net> References: <566B2EED.6010408@khandkar.net> Message-ID:

Hi,

Given that the Lager readme is pretty good, you can get a pretty good answer on this by reviewing changes to the readme.
$ git clone https://github.com/basho/lager.git
$ cd lager
$ git diff 2.0.0rc2 3.0.2 -- README.md

The diff will include any API changes. ________________________________________ From: erlang-questions-bounces@REDACTED on behalf of Siraaj Khandkar Sent: Saturday, 12 December 2015 7:15 AM To: erlang-questions@REDACTED Subject: [erlang-questions] lager changelog? _______________________________________________ erlang-questions mailing list erlang-questions@REDACTED http://erlang.org/mailman/listinfo/erlang-questions From skribent_har@REDACTED Sat Dec 12 08:02:14 2015 From: skribent_har@REDACTED (Martin) Date: Sat, 12 Dec 2015 07:02:14 +0000 Subject: [erlang-questions] Question about Erlang and Ada Message-ID: Hi everyone I have a project in the field of robotics where I am considering using Erlang and SWI Prolog for a real-time system. However, since I am open to all kinds of input, I wrote to a company that sells Ada solutions (since the language is made for critical systems) and asked them to make a case for Ada vs Erlang. They wrote back that they didn't know enough about Erlang to comment on the "let it crash" philosophy, but they wrote that: Ada philosophy is "build is correct". That's achieved through an extensive specification language (including contract-based programming) together with dynamic and static verification techniques. So my question is: Do you think that there are times when Ada's philosophy is better than Erlang's in a real-time system, or is the Erlang model always better? Appreciate all help I can get Best regards Martin -------------- next part -------------- An HTML attachment was scrubbed...
URL: From v@REDACTED Sat Dec 12 08:29:49 2015 From: v@REDACTED (Valentin Micic) Date: Sat, 12 Dec 2015 09:29:49 +0200 Subject: [erlang-questions] Question about Erlang and Ada In-Reply-To: References: Message-ID: PANTA RHEI ! "No man ever steps in the same river twice, for it's not the same river and he's not the same man." (Heraclitus, 535-475 BC) Seeing the above, which approach do you think would be more appropriate to what you're trying to achieve? In my view: "let it crash" will force you to adjust the "man" that enters the river (you can always learn something from the crash). By the same analogy, "build is correct", will ignore the river (if the build is correct, changes to the river, and hence the river, are irrelevant). The same should hold for robots. I think. V/ On 12 Dec 2015, at 9:02 AM, Martin wrote: > > Hi everyone > > I have a project in the field of robotics were I consider using Erlang and SWI Prolog for a real time system. However since I am open for all kind of input, I wrote to a company that sells Ada solutions (since the language is made for critical systems) and asked them to make a case for Ada vs Erlang. They wrote back that they didn't know enough about Erlang to comment on the "let it crash" philosophy but they wrote that: > > Ada philosophy is "build is correct". That's achieved through an extensive specification language (including contract-based programming) together with dynamic and static verification techniques. > > So my question is: > Do you think that there are times when Adas philosophy is better then Erlang, in a real time system, or is the Erlang model always better? > > Appreciate all help I can get > > Best regards > > Martin > > > > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From tony@REDACTED Sat Dec 12 11:52:16 2015 From: tony@REDACTED (Tony Rogvall) Date: Sat, 12 Dec 2015 11:52:16 +0100 Subject: [erlang-questions] maps:keys/1, values/1 stability? In-Reply-To: References: Message-ID: Maybe these examples can shed some light on the order issue? Check the output of the expressions below on R18. maps:keys(maps:from_list([{I,I} || I <- lists:seq(1,32)])). maps:keys(maps:from_list([{I,I} || I <- lists:seq(1,33)])). The assertion is however correct (I hope :-). /Tony > On 11 dec 2015, at 19:35, Kevin Montuori wrote: > > > Hi All -- > > The maps documentation indicates that keys/1 and values/1 will return > results in arbitrary order, which makes sense. I'm wondering if they'll > return in the *same* arbitrary order. In other words, is this assertion > always correct? > > M = maps:from_list(lists:zip(maps:keys(M), maps:values(M))). > > Thanks for any insight. > > k. > > -- > Kevin Montuori > montuori@REDACTED > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: Message signed with OpenPGP using GPGMail URL: From dmkolesnikov@REDACTED Sat Dec 12 12:03:35 2015 From: dmkolesnikov@REDACTED (Dmitry Kolesnikov) Date: Sat, 12 Dec 2015 13:03:35 +0200 Subject: [erlang-questions] Question about Erlang and Ada In-Reply-To: References: Message-ID: <8A5ABCB5-BFAD-44DD-8E7D-4D3561388C03@gmail.com> Hello, There was an excellent post about Erlang and its possible place within Curiosity: http://jlouisramblings.blogspot.fi/2012/08/getting-25-megalines-of-code-to-behave.html "Some of the traits of the Curiosity Rover's software closely resemble the architecture of Erlang. Are these traits basic for writing robust software?" The "let it crash" philosophy allows reacting to failures caused by environment change. The "build it correct" approach requires you to predict the environment's behavior well in advance. I think the concept of a supervision tree will help you to build a consistent state for your application. A properly defined supervision strategy enforces state "correctness". Some time ago, I've looked into Erlang for robotics (drones) at the level of a hobby project. It would be great to understand your application :-) Best Regards, Dmitry > On Dec 12, 2015, at 9:29 AM, Valentin Micic wrote: > > PANTA RHEI ! > > "No man ever steps in the same river twice, for it's not the same river and he's not the same man." > (Heraclitus, 535-475 BC) > > Seeing the above, which approach do you think would be more appropriate to what you're trying to achieve? > > In my view: "let it crash" will force you to adjust the "man" that enters the river (you can always learn something from the crash). > By the same analogy, "build is correct" will ignore the river (if the build is correct, changes to the river, and hence the river, are irrelevant). > The same should hold for robots. I think. > > V/ > > > On 12 Dec 2015, at 9:02 AM, Martin wrote: > >> >> Hi everyone >> >> I have a project in the field of robotics where I am considering using Erlang and SWI Prolog for a real-time system. However, since I am open to all kinds of input, I wrote to a company that sells Ada solutions (since the language is made for critical systems) and asked them to make a case for Ada vs Erlang. They wrote back that they didn't know enough about Erlang to comment on the "let it crash" philosophy, but they wrote that: >> >> Ada philosophy is "build is correct". That's achieved through an extensive specification language (including contract-based programming) together with dynamic and static verification techniques.
>> >> So my question is: >> Do you think that there are times when Adas philosophy is better then Erlang, in a real time system, or is the Erlang model always better? >> >> Appreciate all help I can get >> >> Best regards >> >> Martin >> >> >> >> _______________________________________________ >> erlang-questions mailing list >> erlang-questions@REDACTED >> http://erlang.org/mailman/listinfo/erlang-questions > > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions From nathaniel@REDACTED Sat Dec 12 13:16:12 2015 From: nathaniel@REDACTED (Nathaniel Waisbrot) Date: Sat, 12 Dec 2015 07:16:12 -0500 Subject: [erlang-questions] maps:keys/1, values/1 stability? In-Reply-To: References: Message-ID: The implementation in 18.1 ( https://github.com/erlang/otp/blob/1523be48ab4071b158412f4b06fe9c8d6ba3e73c/erts/emulator/beam/erl_map.c#L2299-L2331) certainly looks like they use the same ordering. However, it would _never_ be a good idea to depend on that behavior. If you need to loop over key/value pairs, use maps:map/2 or maps:fold/3 or maps:to_list/1. On Fri, Dec 11, 2015 at 1:35 PM, Kevin Montuori wrote: > > Hi All -- > > The maps documenation indicates that keys/1 and values/1 will return > results in arbitrary order, which makes sense. I'm wondering if they'll > return in the *same* arbitrary order. In other words, is this assertion > always correct? > > M = maps:from_list(lists:zip(maps:keys(M), maps:values(M))). > > Thanks for any insight. > > k. > > -- > Kevin Montuori > montuori@REDACTED > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions > -------------- next part -------------- An HTML attachment was scrubbed... 
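To make the advice above concrete, here is a small sketch (module and function names are mine, for illustration) of order-insensitive ways to work with a map, instead of zipping maps:keys/1 with maps:values/1:

```erlang
-module(map_iter).
-export([sum_values/1, rebuild/1]).

%% Fold over key/value pairs; the visit order is unspecified,
%% but the result does not depend on it.
sum_values(M) ->
    maps:fold(fun(_K, V, Acc) -> Acc + V end, 0, M).

%% Rebuild a map from its pairs. Unlike zipping maps:keys/1 with
%% maps:values/1, each key travels together with its own value,
%% so no assumption about matching traversal orders is needed.
rebuild(M) ->
    maps:from_list(maps:to_list(M)).
```

Both functions give the same result regardless of how the runtime happens to order the keys internally.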
URL: From wojtek@REDACTED Sat Dec 12 14:21:59 2015 From: wojtek@REDACTED (Wojtek Narczyński) Date: Sat, 12 Dec 2015 14:21:59 +0100 Subject: [erlang-questions] Question about Erlang and Ada In-Reply-To: References: Message-ID: <566C1F77.1050701@power.com.pl> On 12.12.2015 08:02, Martin wrote: > > So my question is: > > Do you think that there are times when Ada's philosophy is better than > Erlang's in a real-time system, or is the Erlang model always better? > > > For life-critical systems (lifts, trains, aircraft), the Ada philosophy is, to put it gently, better. Bare Ada won't get you there; AdaCore was referring to SPARK (Ada + annotations + proofs). But it is also very hard to do. For Lego Mindstorms, you will be fine with Prolog or Erlang. Or Curry. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jesper.louis.andersen@REDACTED Sat Dec 12 18:15:32 2015 From: jesper.louis.andersen@REDACTED (Jesper Louis Andersen) Date: Sat, 12 Dec 2015 18:15:32 +0100 Subject: [erlang-questions] maps:keys/1, values/1 stability? In-Reply-To: References: Message-ID: On Sat, Dec 12, 2015 at 1:16 PM, Nathaniel Waisbrot wrote: > However, it would _never_ be a good idea to depend on that behavior. This would be my bet too. One could easily imagine a future in which garbage collection would compress data down and simplify the heap. This could mean reordering in hash-like structures such as HAMT. Another case could be to solve highly colliding terms by running a hash family and rehashing in certain situations. If your code relies on the order, the design space is much smaller than it is right now. -- J. -------------- next part -------------- An HTML attachment was scrubbed... URL: From montuori@REDACTED Sat Dec 12 22:13:39 2015 From: montuori@REDACTED (Kevin Montuori) Date: Sat, 12 Dec 2015 15:13:39 -0600 Subject: [erlang-questions] maps:keys/1, values/1 stability?
In-Reply-To: (Jesper Louis Andersen's message of "Sat, 12 Dec 2015 18:15:32 +0100") References: Message-ID: Thanks Nathaniel, Jesper, et al. for the replies. Absent documentation to the contrary I agree with you entirely, but was curious. I hadn't considered that the items might be rehashed during the (seemingly immutable) map's lifetime. I appreciate the insights. k. -- Kevin Montuori montuori@REDACTED From kvratkin@REDACTED Sun Dec 13 08:33:57 2015 From: kvratkin@REDACTED (Kirill Ratkin) Date: Sun, 13 Dec 2015 02:33:57 -0500 Subject: [erlang-questions] Question about Erlang and Ada In-Reply-To: <566C1F77.1050701@power.com.pl> References: <566C1F77.1050701@power.com.pl> Message-ID: It's a real-life example about Lisp, but Lisp and Erlang have a similar nature. (Yes, CL doesn't support 'let it crash', but they are similar anyway.) An even more impressive instance of remote debugging occurred on NASA's 1998 Deep Space 1 mission. Half a year after the spacecraft launched, a bit of Lisp code was going to control the spacecraft for two days while conducting a sequence of experiments. Unfortunately, a subtle race condition in the code had escaped detection during ground testing and was already in space. When the bug manifested in the wild--100 million miles away from Earth--the team was able to diagnose and fix the running code, allowing the experiments to complete. One of the programmers described it as follows: Debugging a program running on a $100M piece of hardware that is 100 million miles away is an interesting experience. Having a read-eval-print loop running on the spacecraft proved invaluable in finding and fixing the problem. This is from the 'Practical Common Lisp' book.
Also the original link from JPL: http://www.flownet.com/gat/jpl-lisp.html On Dec 12, 2015 4:22 PM, "Wojtek Narczyński" wrote: > On 12.12.2015 08:02, Martin wrote: > > So my question is: > > Do you think that there are times when Ada's philosophy is better than > Erlang's in a real-time system, or is the Erlang model always better? > > > > For life-critical systems (lifts, trains, aircraft), the Ada philosophy is, > to put it gently, better. Bare Ada won't get you there; AdaCore was > referring to SPARK (Ada + annotations + proofs). But it is also very hard > to do. > > For Lego Mindstorms, you will be fine with Prolog or Erlang. Or Curry. > > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mark@REDACTED Mon Dec 14 04:33:37 2015 From: mark@REDACTED (Mark Steele) Date: Sun, 13 Dec 2015 22:33:37 -0500 Subject: [erlang-questions] String parsing recommendations Message-ID: Hi all, This is a pretty basic question, but I'm new so bear with me. I've got a binary that might look something like this: <<"(foo,bar(faz,fez,fur(foe)),fuz)">> Parsed, it might look like:
[
 [<<"foo">>],
 [<<"bar">>,<<"faz">>],
 [<<"bar">>,<<"fez">>],
 [<<"bar">>,<<"fur">>,<<"foe">>],
 [<<"fuz">>]
]
Any recommendations on the best approach to tackle this in Erlang? Regards, Mark -------------- next part -------------- An HTML attachment was scrubbed... URL: From zxq9@REDACTED Mon Dec 14 05:02:24 2015 From: zxq9@REDACTED (zxq9) Date: Mon, 14 Dec 2015 13:02:24 +0900 Subject: [erlang-questions] String parsing recommendations In-Reply-To: References: Message-ID: <1468783.uT1lQZLqR7@changa> On Sunday 2015-12-13 22:33:37 Mark Steele wrote: > Hi all, > > This is a pretty basic question, but I'm new so bear with me.
> > I've got a binary that might look something like this: > > <<"(foo,bar(faz,fez,fur(foe)),fuz)">> > > Parsed, the it might look like: > [ > [<<"foo">>], > [<<"bar>>,<<"faz">>], > [<<"bar">>,<<"fez">>], > [<<"bar">>,<<"fur">>,<<"foe">>], > [<<"fuz">>] > ] > > > Any recommendations on the best approach to tackle this in Erlang? I'm a little confused about what you are asking. Can you provide some example input, and some example output? It seems like you're asking how to write a function that takes input X and returns output Y...? -Craig From mikpelinux@REDACTED Mon Dec 14 10:27:57 2015 From: mikpelinux@REDACTED (Mikael Pettersson) Date: Mon, 14 Dec 2015 10:27:57 +0100 Subject: [erlang-questions] String parsing recommendations In-Reply-To: References: Message-ID: <22126.35741.750950.255610@gargle.gargle.HOWL> Mark Steele writes: > Hi all, > > This is a pretty basic question, but I'm new so bear with me. > > I've got a binary that might look something like this: > > <<"(foo,bar(faz,fez,fur(foe)),fuz)">> > > Parsed, the it might look like: > [ > [<<"foo">>], > [<<"bar>>,<<"faz">>], > [<<"bar">>,<<"fez">>], > [<<"bar">>,<<"fur">>,<<"foe">>], > [<<"fuz">>] > ] > > > Any recommendations on the best approach to tackle this in Erlang? The input language is clearly not regular, so you'd need something stronger than regular expressions for parsing. In this case a context-free parser should work. Personally I'd implement a separate scanner which returns tokens and updated input, and a recursive descent parser which handles the grammar and produces the output. Your desired output is not like a parse tree but more like an evaluation of the parse tree (bar(faz,fez,...) expands to a different structure), but that expansion looks trivial so I'd let the parser do it -- otherwise an intermediate parse tree and a separate evaluator will work. 
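The scanner-plus-recursive-descent approach described above might look roughly like this (a sketch of my own, not from the thread; module and function names are made up, and error handling is omitted):

```erlang
-module(paths).
-export([parse/1]).

%% parse(<<"(foo,bar(faz,fez,fur(foe)),fuz)">>) returns the
%% root-to-leaf paths from the original question.
parse(Bin) ->
    ['(' | Tokens] = scan(Bin),
    {Items, []} = items(Tokens, []),
    lists:append([paths(I) || I <- Items]).

%% Scanner: split the binary into '(', ')', ',' and {name, Bin} tokens.
scan(<<>>) -> [];
scan(<<$(, Rest/binary>>) -> ['(' | scan(Rest)];
scan(<<$), Rest/binary>>) -> [')' | scan(Rest)];
scan(<<$,, Rest/binary>>) -> [',' | scan(Rest)];
scan(Bin) ->
    {Name, Rest} = name(Bin, <<>>),
    [{name, Name} | scan(Rest)].

name(<<C, Rest/binary>>, Acc) when C =/= $(, C =/= $), C =/= $, ->
    name(Rest, <<Acc/binary, C>>);
name(Rest, Acc) ->
    {Acc, Rest}.

%% Recursive-descent parser: a comma-separated item list closed by ')'.
items(Tokens, Acc) ->
    {Item, Rest} = item(Tokens),
    case Rest of
        [',' | More] -> items(More, [Item | Acc]);
        [')' | More] -> {lists:reverse([Item | Acc]), More}
    end.

%% An item is a name, optionally followed by a parenthesised sublist.
item([{name, N}, '(' | Rest0]) ->
    {Children, Rest} = items(Rest0, []),
    {{N, Children}, Rest};
item([{name, N} | Rest]) ->
    {N, Rest}.

%% Evaluate the parse tree into root-to-leaf paths.
paths({N, Children}) -> [[N | P] || C <- Children, P <- paths(C)];
paths(N) when is_binary(N) -> [[N]].
```

The last function is the "evaluation of the parse tree" step: each inner node prefixes its name onto every path produced by its children.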
There are scanner and parser generating tools for Erlang, but they would be overkill for this simple language -- unless you're new to language processing, in which case they could help by providing examples and imposing a sensible structure on the solution. The re module could be used in a hand-written scanner. Alas, no Erlang magic here, just ordinary compiler frontend code. /Mikael From wojtek@REDACTED Mon Dec 14 12:14:10 2015 From: wojtek@REDACTED (Wojtek Narczyński) Date: Mon, 14 Dec 2015 12:14:10 +0100 Subject: [erlang-questions] Question about Erlang and Ada In-Reply-To: References: <566C1F77.1050701@power.com.pl> Message-ID: <566EA482.3080807@power.com.pl> On 2015-12-13 at 08:33, Kirill Ratkin wrote: > > It's a real-life example about Lisp, but Lisp and Erlang have a similar > nature. > > Interesting. Cool even. But hardly real-time. I still insist that there is a need for both: "let it crash" and "correct by construction". You wouldn't want to let your fly-by-wire system controller crash during landing, one meter above the runway. But you also wouldn't want to build a correct but feature-poor in-flight entertainment system. -- Wojtek From marc@REDACTED Mon Dec 14 12:31:12 2015 From: marc@REDACTED (Marc Worrell) Date: Mon, 14 Dec 2015 12:31:12 +0100 Subject: [erlang-questions] [ANN] Zotonic release 0.13.6 Message-ID: <448758F5-3406-4930-AAB3-D21F3AC24FFE@worrell.nl> Hi, Zotonic is the Erlang Content Management System and Framework.
We just released Zotonic 0.13.6: http://zotonic.com/docs/latest/dev/releasenotes/rel_0.13.6.html This is a maintenance release of Zotonic 0.13. Besides the usual bug fixes and other maintenance there are some more changes:
* Fix the deps version lock by adding USE_REBAR_LOCKED
* New bin/zotonic command to start without a shell: start-nodaemon
* Overview of edges in the mod_admin
* New module mod_admin_merge to merge resources
* Retained messages for ~pagesession topics
* Performance fixes for API calls
* Significantly better revision history in mod_backup
* Add no_touch option to m_rsc:update/4
If you update, then also ensure that you have the accompanying z_stdlib and webzmachine dependencies. A big thank you to everybody who contributed code, ideas, and bug reports. All are valuable for the continuing progress of Zotonic. Greetings from the Zotonic team. -------------- next part -------------- An HTML attachment was scrubbed... URL: From mark@REDACTED Mon Dec 14 16:35:55 2015 From: mark@REDACTED (Mark Steele) Date: Mon, 14 Dec 2015 10:35:55 -0500 Subject: [erlang-questions] erlang-questions Digest, Vol 248, Issue 1 In-Reply-To: References: Message-ID: > Message: 2 > Date: Mon, 14 Dec 2015 13:02:24 +0900 > From: zxq9 > Subject: Re: [erlang-questions] String parsing recommendations > > I'm a little confused about what you are asking. Can you provide some > example input, and some example output? It seems like you're asking > how to write a function that takes input X and returns output Y...? > > -Craig More along the lines of best practices and tools (e.g. leex/yecc, neotoma, etc.). I'm good, did some additional research. > Message: 3 > Date: Mon, 14 Dec 2015 10:27:57 +0100 > From: Mikael Pettersson > Subject: Re: [erlang-questions] String parsing recommendations > > Alas, no Erlang magic here, just ordinary compiler frontend code. > > /Mikael Yup, thanks. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ok@REDACTED Mon Dec 14 23:47:09 2015 From: ok@REDACTED (Richard A. O'Keefe) Date: Tue, 15 Dec 2015 11:47:09 +1300 Subject: [erlang-questions] Question about Erlang and Ada In-Reply-To: References: Message-ID: <6EFFEA51-5D68-477E-99C4-7DB7AF8B19AE@cs.otago.ac.nz> On 12/12/2015, at 8:02 pm, Martin wrote: > I have a project in the field of robotics where I am considering using Erlang and SWI Prolog for a real-time system. Erlang is described as "soft real-time".
SWI Prolog isn't real-time in any sense. It has some rather strange thread support, but then, so does C. > However since I am open for all kind of input, I wrote to a company that sells Ada solutions (since the language is made for critical systems) and asked them to make a case for Ada vs Erlang. They wrote back that they didn't know enough about Erlang to comment on the "let it crash" philosophy but they wrote that: > > Ada philosophy is "build is correct". That's achieved through an extensive specification language (including contract-based programming) together with dynamic and static verification techniques. Well, kind of. There was no contract-based programming in Ada 81, Ada 83, or Ada 95. Ada 2012 *does* support it. But for years if you wanted "an extensive specification language", SPARK was what you had to go for, and SPARK doesn't include all of Ada. (No pointers and no exception handling, for example. See http://docs.adacore.com/spark2014-docs/html/ug/spark_2014.html for details.) > So my question is: > Do you think that there are times when Adas philosophy is better then Erlang, in a real time system, or is the Erlang model always better? I think you are setting up a false dichotomy here. One of the core features of Erlang is that processes cannot touch each other's memory, so that processes communicate by a form of message passing with a kind of selective receive. Erlang programs can be carved up with processes running on different machines with very little impact on the source code. One of the core features of Ada was that tasks cannot change each other's memory, so that tasks communicate by a form of message passing with a kind of selective receive. Ada programs can be carved up with tasks running on different machines with very little impact on the source code. Ada 95 added 'protected records' which are 'monitors' in the traditional (not the Java) sense, but this doesn't really change the basic model. 
Since you mention robotics, let me mention Rodney Brooks. Think of a robot architecture divided into layers, where there are low level sensorimotor thingies and high level goal satisfaction thingies. The low level thingies are concerned with hardware interfaces and issues like reporting events and ensuring that the motors don't burn out. This is a perfect application for Ada and yes SPARK and this is stuff where hard real-time applies. The high level thingies are dealing with modelling the world and planning. Your goals may suddenly change from "I'd like a chocolate bar" to "oh ---- oh ---- I'm going to DIE what do I DO?" or something else may go wrong and throwing away your current plans and replanning from the current situation is often a good thing to do if you can afford it. Letting high level thingies crash and restarting them makes sense. Hello Erlang model. Here's an analogy: going to sleep interrupts our scheming but not our breathing. You really don't want breathing to "crash". And then of course we come to economics. Ada was intended to be used in spacecraft, military aeroplanes, all sorts of things where the cost of failure could be so enormous that it pays to spend a *lot* of effort trying to make your software reliable. Spending $2000 of programmer time to reduce the likelihood of a $1000 robot burning out a motor, that doesn't make quite as much sense. "Build is right" means that the implementation correctly implements the specification. But the specification could be wrong, and the world could change. "Let it crash" means that when the implementation doesn't match the real world, don't try to patch your data structures, because amongst other things, your ideas about *how* to patch the data structures are based on the same ideas that have just turned out to be wrong. I really do not see any conflict between these two. 
Note in particular that while Erlang's type system is nowhere near as strong as Ada's, it now *has* a type system, which is a poor man's specification language, and that Thomas Arts has done work on verifying Erlang. From skribent_har@REDACTED Tue Dec 15 01:45:48 2015 From: skribent_har@REDACTED (Martin) Date: Tue, 15 Dec 2015 00:45:48 +0000 Subject: [erlang-questions] Question about Erlang and Ada In-Reply-To: <6EFFEA51-5D68-477E-99C4-7DB7AF8B19AE@cs.otago.ac.nz> References: , <6EFFEA51-5D68-477E-99C4-7DB7AF8B19AE@cs.otago.ac.nz> Message-ID: Hi everyone, Martin here. Just want to say thanks to all of you for your good answers. :-) I should have written Ada SPARK, because that was what I meant. I think I get the picture now. I will use all three, even if it's expensive. Maybe four... there must be a way to squeeze in my beloved Prolog without making it unstable. The reasoning functions are so easy to program with it. Going to make it open source in the future. Maybe I can post a link here when it's in alpha? Big thanks Martin
From paul.wolf23@REDACTED Tue Dec 15 01:45:26 2015
From: paul.wolf23@REDACTED (Paul Wolf)
Date: Tue, 15 Dec 2015 01:45:26 +0100
Subject: [erlang-questions] Feedback for my first non-trivial Erlang program
Message-ID: 

Hi,

I am totally new to Erlang and functional programming in general and tried to build a little program after I read most of the "sequential programming" part of the Joe Armstrong book. While my program seems to work, I still have _big_ troubles in two departments:
- my program is crazy slow, as it seems to do a lot of redundant calculations. To me it seems functional, and I don't really know how to solve this kind of (performance) issue
- while I was able to program the logic in a mostly functional style, I had a hard time writing it, and even some minutes after writing it, it is quite hard for me to read

As I am totally new to this stuff (while being an experienced Java developer/software engineer in general), I would highly appreciate some feedback on the following code:

The program basically takes some parameters (hardcoded... I know...) and tells you what your financial possessions are if you save your money in stocks, which in turn have an expected yield, etc. The major part (first ~50 lines) seems fine to me and doesn't cause me much headache. However, as soon as I start considering taxes (which in my country you pay on realized profits), stuff gets worrisome. Could you please review and comment on the "tax" functions in particular (on a technical level - not functional)? For calculating the profit you have to keep track of when you bought how much, and you have to sell your oldest positions first. For simplification I didn't model stocks; instead you basically just put money (without consideration of stock quantities) into the security account and it yields. The tax method, by the way, gets called twice, but also gets evaluated twice (why? is there no cache? is my approach wrong?) and takes up a lot of performance!
Any feedback is much appreciated! Here is the code without further ado:

-module(pension2).

-export([totalBalance/1]).

%% total of what you own:
totalBalance(T) -> caBalance(T) + saBalance(T).

%% balance of cash account - is constant since all money not spent is invested:
caBalance(0) -> caBalanceStart();
caBalance(T) -> caBalance(T-1) + salary(T-1) - expenses(T-1) - tradingFees(T-1).

%% balance of security account - rises while working - can rise while not working if yield > expenses
saBalance(0) -> saBalanceStart();
saBalance(T) -> (saBalance(T-1)*yieldRate() + investments(T-1)).

%% everything spent on securities - in case of not working, fees and taxes are directly offset by sells
tradingFees(T) ->
    case working(T) of
        true -> investments(T) + transactionFee();
        false -> investments(T) + tax(expenses(T),T) + transactionFee()
    end.

%% what is actually spent on stocks:
investments(T) ->
    case working(T) of
        true-> salary(T) - expenses(T) - transactionFee();
        false-> -expenses(T) - tax(expenses(T),T) - transactionFee()
    end.

%% salary:
salary(0) -> netIncome();
salary(T) ->
    case working(T) of
        true -> salary(T-1)*inflationRate();
        false -> 0
    end.

%% expenses
expenses(0) -> expensesStart();
expenses(T) -> expenses(T-1)*inflationRate().

%% still working?
working(T) -> T < workingTimeUnits().

%% lots of constants - in respect to months
transactionFee() -> 10.
workingTimeUnits() -> 60. %%months
caBalanceStart() -> 10000.
saBalanceStart() -> 10000.
netIncome() -> 3000.
expensesStart() -> 2000.
inflationRate() -> 1.0017.
yieldRate() -> 1.004.
taxRate() -> 0.2638.

%% ------------HERE STARTS THE HEAVY LIFTING-----------------

%% used to calc taxes paid on profits
tax(X,T) -> profit(X,T,portfolio(T))*taxRate().

%% what is in the security account at time T:
portfolio(T) ->
    case working(T) of
        true -> [{investments(X),X} || X <- lists:seq(0,T)];
        false -> remove(investments(T-1),T,portfolio(T-1))
    end.
%% a helper method for removing positions from the portfolio
remove(X,T,[{Amount,Time}|Tail]) ->
    case currentValue(Amount,T-Time) > X of
        true -> [{Amount-originalValue(X,T-Time),T}|Tail];
        false -> remove(X-currentValue(Amount,T-Time),T,Tail)
    end.

%% tells you what is the actual profit for an amount/revenue generated by selling at a time T
profit(X,T,[{Amount,Time}|Tail]) ->
    case currentValue(Amount,T-Time) > X of
        true -> X - originalValue(X,T-Time);
        false -> X - (Amount + profit((X - currentValue(Amount,T-Time)),T,Tail))
    end.

%% helper functions for calculating the current and original value of a position
currentValue(X,TD) -> X * math:pow(yieldRate(),TD).
originalValue(X,TD) -> X / math:pow(yieldRate(),TD).

From technion@REDACTED Tue Dec 15 06:14:54 2015
From: technion@REDACTED (Technion)
Date: Tue, 15 Dec 2015 05:14:54 +0000
Subject: [erlang-questions] Feedback for my first non-trivial Erlang program
In-Reply-To: 
References: 
Message-ID: 

Hi,

It's only a very minor suggestion, but you have a number of constants that could be replaced with compiler macros. For example:

caBalanceStart() -> 10000.

Becomes

-define(CABALANCE, 10000).

Then:

caBalance(0) -> caBalanceStart();

Becomes:

caBalance(0) -> ?CABALANCE

________________________________________
From: erlang-questions-bounces@REDACTED on behalf of Paul Wolf
Sent: Tuesday, 15 December 2015 11:45 AM
To: erlang-questions@REDACTED
Subject: [erlang-questions] Feedback for my first non-trivial Erlang program

[...]
_______________________________________________
erlang-questions mailing list
erlang-questions@REDACTED
http://erlang.org/mailman/listinfo/erlang-questions

From samuelrivas@REDACTED Tue Dec 15 09:36:01 2015
From: samuelrivas@REDACTED (Samuel)
Date: Tue, 15 Dec 2015 09:36:01 +0100
Subject: [erlang-questions] Feedback for my first non-trivial Erlang program
In-Reply-To: 
References: 
Message-ID: 

> It's only a very minor suggestion, but you have a number of constants that could be replaced with compiler macros. For example:

Even though that is actually common practice, I have always found it difficult to see the benefit of it. Unless you want to pattern match against those macros, using functions makes your code simpler, as you don't use the preprocessor (fewer concepts to manage). In general, try to avoid the preprocessor as much as possible. Simplicity is a benefit in itself, but by not using the preprocessor you reap a few practical benefits:

* Those values are easily accessible from the shell when you are debugging (if you export them, or compile with export_all, for example)
* If you need to derive them in the future you have less to rewrite (unless you want to have macros that expand to functions, which is almost always a bad idea)
* If you ever need to share them, you can do it by just exporting the functions instead of sharing an .hrl file (which is also something to avoid, as it adds compilation dependencies that you wouldn't have otherwise)

Best
--
Samuel

From llbgurs@REDACTED Tue Dec 15 09:51:58 2015
From: llbgurs@REDACTED (linbo liao)
Date: Tue, 15 Dec 2015 16:51:58 +0800
Subject: [erlang-questions] Erlang VM will open all socket in every thread?
Message-ID: 

Hi,

Our Erlang application hit a high slab cache memory issue. Checking with the free and slabtop commands, the reason is that proc_inode_cache (meaning lots of inodes in /proc) consumes a lot of memory.
*free command*

$ free -m
> total used free shared buffers cached
> Mem: 3697 3300 397 0 362 118
> -/+ buffers/cache: 2819 877
> Swap: 0 0 0

*slabtop*

> Active / Total Objects (% used) : 5436406 / 5679810 (95.7%)
> Active / Total Slabs (% used) : 234780 / 234780 (100.0%)
> Active / Total Caches (% used) : 65 / 101 (64.4%)
> Active / Total Size (% used) : 2135436.95K / 2161607.05K (98.8%)
> Minimum / Average / Maximum Object : 0.01K / 0.38K / 8.00K
>
> OBJS ACTIVE USE OBJ SIZE SLABS OBJ/SLAB CACHE SIZE NAME
> 2609313 2609313 100% 0.19K 124253 21 497012K dentry
> 2503826 2503826 100% 0.61K 96301 26 1540816K proc_inode_cache

And checking the Erlang VM: it has 323 threads. An interesting thing: all threads have almost the same number of open sockets.

$ ls -lht /proc/13197/task/ |wc -l
> 323
> $ ls -lht /proc/13197/task/13412/fd |wc -l
> 4776
> $ ls -lht /proc/13197/task/13414/fd |wc -l
> 4791
> $ ls -lht /proc/13197/task/13414/fd |head -10
> total 0
> lrwx------ 1 yunba users 64 Dec 15 16:40 4245 -> socket:[2103438292]
> lrwx------ 1 yunba users 64 Dec 15 16:40 4248 -> socket:[2103411552]
> lrwx------ 1 yunba users 64 Dec 15 16:40 4364 -> socket:[2097429782]
> lrwx------ 1 yunba users 64 Dec 15 16:40 4609 -> socket:[2103438164]
> lrwx------ 1 yunba users 64 Dec 15 16:40 4610 -> socket:[2103438165]
> lrwx------ 1 yunba users 64 Dec 15 16:40 4612 -> socket:[2103438175]
> lrwx------ 1 yunba users 64 Dec 15 16:40 4613 -> socket:[2103438176]
> lrwx------ 1 yunba users 64 Dec 15 16:40 4614 -> socket:[2103438177]
> lrwx------ 1 yunba users 64 Dec 15 16:40 4615 -> socket:[2103438178]
> $ ls -lht /proc/13197/task/13414/fd |head -10
> total 0
> lrwx------ 1 yunba users 64 Dec 15 16:40 4248 -> socket:[2103411552]
> lrwx------ 1 yunba users 64 Dec 15 16:40 4364 -> socket:[2097429782]
> lrwx------ 1 yunba users 64 Dec 15 16:40 4630 -> socket:[2097238946]
> lrwx------ 1 yunba users 64 Dec 15 16:40 4631 -> socket:[2097236427]
> lrwx------ 1 yunba users 64 Dec 15 16:40 4632 ->
socket:[2097236430]
> lrwx------ 1 yunba users 64 Dec 15 16:40 4633 -> socket:[2097238953]
> lrwx------ 1 yunba users 64 Dec 15 16:40 4635 -> socket:[2097238954]
> lrwx------ 1 yunba users 64 Dec 15 16:40 4637 -> socket:[2097238956]
> lrwx------ 1 yunba users 64 Dec 15 16:40 4639 -> socket:[2097238957]
> $ ls -lht /proc/13197/task/*/fd |wc -l
> 1524804

If the application opens a socket, will it be opened in every Erlang VM thread?

Thanks,
Linbo
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From dmkolesnikov@REDACTED Tue Dec 15 10:03:05 2015
From: dmkolesnikov@REDACTED (Dmitry Kolesnikov)
Date: Tue, 15 Dec 2015 11:03:05 +0200
Subject: [erlang-questions] Feedback for my first non-trivial Erlang program
In-Reply-To: 
References: 
Message-ID: 

Hello,

One quick suggestion to get rid of `case`:

-define(workingTimeUnits, 60).
-define(is_working(X, Y), X < Y).

investement(T) when ?is_working(T, ?workingTimeUnits) ->
    salary(T) - expenses(T) - transactionFee();
investement(T) ->
    -expenses(T) - tax(expenses(T),T) - transactionFee().

I think function-level guards improve readability. However, you might implement independent 'paths' to calculate meetings for working and non-working human beans.

Best Regards,
Dmitry

> On Dec 15, 2015, at 2:45 AM, Paul Wolf wrote:
>
> Hi,
>
> I am totally new to Erlang and functional programming in general and
> tried to build a little program, after I read most of the "sequential
> programming" part of the Joe Armstrong book. While my program seems to
> work, I have still _big_ troubles in two departments:
> - my program is crazy slow as it seems to do a lot of redundant
> calculations.
From zxq9@REDACTED Tue Dec 15 10:21:47 2015
From: zxq9@REDACTED (zxq9)
Date: Tue, 15 Dec 2015 18:21:47 +0900
Subject: [erlang-questions] Feedback for my first non-trivial Erlang program
In-Reply-To: 
References: 
Message-ID: <1838048.PgCCDcU252@changa>

On 2015年12月15日 火曜日 05:14:54 Technion wrote:
> Hi,
>
> It's only a very minor suggestion, but you have a number of constants that could be replaced with compiler macros. For example:
>
> caBalanceStart() -> 10000.
> Becomes
> -define(CABALANCE, 10000).
>
> Then:
> caBalance(0) -> caBalanceStart();
> Becomes:
> caBalance(0) -> ?CABALANCE

Keep in mind, though, that constants are *almost never actually constant* throughout the service life of a program. Macros are a *bad* habit in this case. A function call can stand in for literally anything you might want to do -- including looking up a particular tax rate or whatever based on the circumstance of the program. (Anything you might want to do outside of guards, of course. The one place where it looks like a constant will actually be constant is "working time units", and this is a good candidate for macroization and elimination of some `case` statements.)
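The point that a function call can stand in for "literally anything" can be made concrete with a small sketch (the module name, the 24-month threshold, and the second rate here are invented for illustration): a zero-arity "constant" function can later grow circumstance-dependent logic, which a `-define` cannot do without touching every call site.

```erlang
-module(rates_sketch).
-export([tax_rate/0, tax_rate/1]).

%% Today: a plain constant, written as a function (as in pension2).
tax_rate() -> 0.2638.

%% Later: the same idea can become time-dependent without any call-site
%% churn beyond passing T along. The 24-month threshold and the 0.25
%% rate are invented values, purely for illustration.
tax_rate(T) when T < 24 -> 0.25;
tax_rate(_T)            -> 0.2638.
```

A macro like `?TAX_RATE`, by contrast, would have to be edited out of every use site the day the rate stopped being a single number.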
In-module calls are almost certainly *not* a cause of significant slowdown in the program -- though it is possible that the arbitrarily large stack rollup in the non-tail-recursive functions that litter the code is. That said, consider this:

caBalance(0) -> caBalanceStart();
caBalance(T) -> caBalance(T-1) + salary(T-1) - expenses(T-1) - tradingFees(T-1).

This leaves a frame per iteration on the stack. saBalance/1 does the same, as does salary/1, etc. These are not tail recursive and so leave a trail of values that have to be "rolled up" after the fact to give the final result. I'm not sure whether this is part of the performance problem or not, but regardless, you *really* want tail-recursive calls (where no return-to-caller is necessary, and so the stack can be eliminated). This has several advantages: in addition to lowering memory use (dramatically for high values of T), it also permits you to understand the *complete* state of a particular call by examining only a single call instead of the entire stack of calls (so you can start a test call in the middle if you want).

ca_balance(0) -> ca_balance_start();
ca_balance(T) -> ca_balance(T, ca_balance_start()).

ca_balance(0, A) -> A;
ca_balance(T, A) ->
    NewT = T - 1,
    NewA = A + salary(NewT) - expenses(NewT) - trading_fees(NewT),
    ca_balance(NewT, NewA).

This version doesn't have that problem. (Note also the lack of camelCaseFunctionNames() here. Capitals mean things in Erlang; don't confuse the issue by pretending it's C++ or Java. Also, Erlang does not have methods, only functions.)

The main problem with performance in the program, though, is constant re-computation of values you could already know after they've been computed once. The entire function works that way, actually. Regardless of how long this particular version of the function takes to run, you could run it once on a very high value to build a memoized chart of values and retrieve them by key instead of recomputing all the time.
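The memoized-chart idea can be sketched with an ETS table (a hedged illustration, not code from the thread: the module and table names are invented, and `investments/1` is left out so the recurrence stays short). Each value of T is then computed exactly once.

```erlang
-module(memo_sketch).
-export([sa_balance/1]).

%% Invented sketch: memoize a saBalance-like recurrence in an ETS table
%% so each T is computed once and later lookups are O(1).
%% investments(T) is omitted here to keep the recurrence short.
sa_balance(T) when T >= 0 ->
    case ets:info(sa_memo) of
        undefined -> ets:new(sa_memo, [named_table, public]);
        _ -> ok
    end,
    memo(T).

memo(0) ->
    10000;                                %% saBalanceStart()
memo(T) ->
    case ets:lookup(sa_memo, T) of
        [{T, V}] ->
            V;                            %% already computed: reuse it
        [] ->
            V = memo(T - 1) * 1.004,      %% yieldRate()
            ets:insert(sa_memo, {T, V}),
            V
    end.
```

The same effect can be had without a table by folding upward from 0 to T with lists:foldl/3, which is the "building up to T instead of counting down to it" approach also discussed in this thread.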
Even without that, though, a significant amount of this function's time is spent computing intermediate values that could be memoized or bypassed on successive iterations by building up to T instead of counting down to it. Not to mention "next" values that are doubly or triply computed here and there, some of them rather complex:

remove(X,T,[{Amount,Time}|Tail]) ->
    case currentValue(Amount,T-Time) > X of
        true -> [{Amount-originalValue(X,T-Time),T}|Tail];
        false -> remove(X-currentValue(Amount,T-Time),T,Tail)
    end.

could be

remove(_, _, []) -> [];
remove(X, T, [{Amount, Time} | Tail]) ->
    TLessTime = T - Time,
    CurrentOverTime = current_value(Amount, TLessTime),
    case CurrentOverTime > X of
        true -> [{Amount - original_value(X, TLessTime), T} | Tail];
        false -> remove(X - CurrentOverTime, T, Tail)
    end.

This way you don't compute `current_value(Amount, TLessTime)` twice (once to compare it, and once to pass it through if it's =< X), and `T - Time` happens at least twice in all cases -- and you potentially compute remove/3 a *lot* (the insanity really starts with any call to `portfolio/1` that is larger than 61). I added an empty-list clause. I'm not certain that it will ever fit, but if you ever use this function in another context the probability is quite high that it may be passed an empty list. The same goes for negative values for your one exported function:

total_balance(T) when T < 0 -> {error, negative_time};
total_balance(T) -> caBalance(T) + saBalance(T).

This might be a better way to make sure your program doesn't iterate forever because of externally generated bad data.

The other thing about this function is... FLOAT VALUES. A version of this with tail-recursive functions and the current one will probably return subtly (or not so subtly) different results, simply because of float carry value errors. Accumulated values and rolled-up values tend not to turn out quite the same way. There is a *lot* of iterative arithmetic indicated here...
This could be prevented a few ways -- by picking a global minimum fractional value and using integer operations (but then you have to watch out for hitting boundaries that always return peculiarly round values, especially 0), by using fixed-point arithmetic, or by creating a real type and reducing at the end. Some form of the first method is used in currency trade calculations, but different industries have different regulations about how to approach that.

Something bothers me about this function:

portfolio(T) ->
    case working(T) of
        true -> [{investments(X), X} || X <- lists:seq(0, T)];
        false -> remove(investments(T - 1), T, portfolio(T - 1))
    end.

It's not just that it isn't tail recursive, though. Passing `portfolio(T - 1)` as the third argument to remove/3 makes this a bit more mysterious at first glance than it should be. It is a computational complexity explosion that seems totally unnecessary.

-Craig

From zxq9@REDACTED Tue Dec 15 10:22:25 2015
From: zxq9@REDACTED (zxq9)
Date: Tue, 15 Dec 2015 18:22:25 +0900
Subject: [erlang-questions] Feedback for my first non-trivial Erlang program
In-Reply-To: 
References: 
Message-ID: <1707107.KpppjOkQOS@changa>

On 2015年12月15日 火曜日 11:03:05 Dmitry Kolesnikov wrote:
> Hello,
>
> One quick suggestion to get rid of `case`
>
> -define(workingTimeUnits, 60).
> -define(is_working(X, Y), X < Y).
>
> investement(T)
> when ?is_working(T, ?workingTimeUnits) ->
>     salary(T) - expenses(T) - transactionFee();
>
> investement(T) ->
>     -expenses(T) - tax(expenses(T),T) - transactionFee().
>
> I think function-level guards improve readability. However, you might implement independent 'paths' to calculate meetings for working and non-working human beans.
>

What is the advantage of that over this:

-define(working_time_units, 60).

investment(T) when T < ?working_time_units -> % blahblah.

??? That sort of frivolous macroization drives me nuts.
I totally agree with getting rid of case statements, but there are better ways -- and macroizing guards that rely on more macros is not it. That said, considering how investment is actually written, I think the case may be the lesser of two evils in terms of readability and multiply-calling the same function:

investments(T) ->
    Expenses = expenses(T),
    case T < ?working_time_units of
        true -> salary(T) - Expenses - transaction_fee();
        false -> -Expenses - tax(Expenses, T) - transaction_fee()
    end.

or

investment(T) when T < ?working_time_units ->
    salary(T) - expenses(T) - transaction_fee();
investment(T) ->
    Expenses = expenses(T),
    -Expenses - tax(Expenses, T) - transaction_fee().

I actually think the first version reads slightly better (juuust barely, if only because Expenses is a fixed symbol throughout now), but the second will be easier to glance at in a trace and know exactly what is going on.

-Craig

From carlsson.richard@REDACTED Tue Dec 15 10:49:34 2015
From: carlsson.richard@REDACTED (Richard Carlsson)
Date: Tue, 15 Dec 2015 10:49:34 +0100
Subject: [erlang-questions] Question about Erlang and Ada
In-Reply-To: <566EA482.3080807@power.com.pl>
References: <566C1F77.1050701@power.com.pl> <566EA482.3080807@power.com.pl>
Message-ID: 

> I still insist that there is a need for both: "let it crash" and "correct by
> construction". You wouldn't want to let your fly-by-wire system controller
> crash during landing, one meter above the runway. But you also wouldn't
> want to build a correct but feature-poor in-flight entertainment system.
>

If restarting is fast enough (e.g. sub-millisecond), then yes, I do want the fly-by-wire system controller to crash and get back to a clean state, rather than make a poor guess at what to do to fix the problem, or lock up. The adage "let it crash" doesn't mean you're allowed to write sloppy incorrect code in a mission-critical system.
It just means that 1) very few complicated programs are ever completely correct (both in implementation and specification) and complete (prepared to handle all situations that may occur in production), and 2) when an unexpected error occurs, the code itself will typically not know what to do to correct the problem, and any attempts at doing so may just mask the problem or make things worse. It is then best to let it crash - under the assumption that you have a supervisor or heart that can restart the failed subsystem so that it can resume its work. (Or for offline systems, you can restart it manually after checking the logs.) Parts of the program state may need to be stored persistently (e.g. in Mnesia) in order to survive a restart - for a flight controller, the last known position and velocity would probably be good to have - but the more you make persistent, the greater the risk that a corrupted state will not be fixed by the restart.

So yes, both are good to have, but don't trust "correct by construction" too much, and don't underestimate how many situations a clean quick restart can fix.

/Richard C
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From lukas@REDACTED Tue Dec 15 11:06:29 2015
From: lukas@REDACTED (Lukas Larsson)
Date: Tue, 15 Dec 2015 11:06:29 +0100
Subject: [erlang-questions] Erlang VM will open all socket in every thread?
In-Reply-To: 
References: 
Message-ID: 

Hello,

On Tue, Dec 15, 2015 at 9:51 AM, linbo liao wrote:
>
> If the application opens a socket, will it be opened in every Erlang VM
> thread?
>

Reading about tasks in the manual page for proc(5), http://man7.org/linux/man-pages/man5/proc.5.html:

    For attributes that are shared by all threads, the contents for each
    of the files under the task/[tid] subdirectories will be the same as
    in the corresponding file in the parent /proc/[pid] directory (e.g.,
    in a multithreaded process, all of the task/[tid]/cwd files will have
    the same value as the /proc/[pid]/cwd file in the parent directory,
    since all of the threads in a process share a working directory).

The fds are another example of a resource that is shared across tasks, so all of them will be duplicated in procfs. The reason you are seeing different values for different tasks is most likely because your application is opening new sockets while you are running the commands.

Lukas
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From technion@REDACTED Tue Dec 15 11:47:26 2015
From: technion@REDACTED (Technion)
Date: Tue, 15 Dec 2015 10:47:26 +0000
Subject: [erlang-questions] Feedback for my first non-trivial Erlang program
In-Reply-To: <1838048.PgCCDcU252@changa>
References: , <1838048.PgCCDcU252@changa>
Message-ID: 

I'd like to continue this thread by discussing benchmarks. It's important to work out which optimisations help, and which don't. timer:tc/3 is useful for this. I've run this with some test values here. It's interesting that the time taken increases at much more than a linear rate.

8> timer:tc(pension2, totalBalance, [10]).
{30,30565.063527483566}
9> timer:tc(pension2, totalBalance, [20]).
{81,41736.17177446989}
12> timer:tc(pension2, totalBalance, [60]).
{840,93052.91160889174}
13> timer:tc(pension2, totalBalance, [70]).
{559006,73294.80080230196}

At 75, it ran so long I killed it. With that said, here are some of the same benchmarks after applying zxq9's tail call optimisation.

18> timer:tc(pension2, totalBalance, [10]).
{28,30565.063527483566}
20> timer:tc(pension2, totalBalance, [30]).
{141,53541.01834136172}
21> timer:tc(pension2, totalBalance, [60]).
{823,93052.91160889175}
22> timer:tc(pension2, totalBalance, [70]).
{562409,73294.80080230198}

So there is definitely a measurable improvement there. You can also write tradingFees in a way that helps recursion, as investments/1 seems to be the bulk of the work:

tradingFees(T) ->
    X = case working(T) of
            true  -> transactionFee();
            false -> tax(expenses(T), T) + transactionFee()
        end,
    X + investments(T).

Further improvements noted (slight):

24> timer:tc(pension2, totalBalance, [10]).
{26,30565.063527483566}
28> timer:tc(pension2, totalBalance, [70]).
{560813,73294.80080230198}

I don't understand what happened with remove/1, but the optimised solution seemed to slow things down in the high cases, while having notable improvements at the low end:

37> timer:tc(pension2, totalBalance, [10]).
{28,30565.063527483566}
41> timer:tc(pension2, totalBalance, [30]).
{246,53541.01834136172}
39> timer:tc(pension2, totalBalance, [60]).
{701,93052.91160889175}
38> timer:tc(pension2, totalBalance, [70]).
{581899,73294.80080230198}

All in all, there are a number of optimisations to be made, but I can see using this app with an input > 100 still being painful. Is this algorithm documented in any other language anywhere? I'm curious as to whether there's a bigger picture being missed here.

I would also propose writing some unit tests. It would be disastrous to "optimise" this code in such a way that the output changes. A few unit tests should address that.

Finally, I would like to suggest using GitHub gists for this much code. Copy+pasting it from email lost most of the formatting and made it painful to read/manipulate.

From technion@REDACTED Tue Dec 15 12:12:24 2015
From: technion@REDACTED (Technion)
Date: Tue, 15 Dec 2015 11:12:24 +0000
Subject: [erlang-questions] Feedback for my first non-trivial Erlang program
In-Reply-To: References: , <1838048.PgCCDcU252@changa>, Message-ID:

Sorry for the spam but continuing the theme further... The next thing we can do is profile the function. I've run eprof here:

43> eprof:profile(fun() -> pension2:totalBalance(20) end).
{ok,41736.1717744699}
44> eprof:analyze().
****** Process <0.111.0>    -- 100.00 % of profiled time ***
FUNCTION                        CALLS      %   TIME  [uS / CALLS]
--------                        -----  -----   ----  [----------]
orddict:new/0                       1   0.01      1  [      1.00]
erl_eval:do_apply/6                 1   0.01      1  [      1.00]
erl_eval:eval_fun/2                 1   0.01      1  [      1.00]
erl_eval:guard0/4                   1   0.01      1  [      1.00]
pension2:saBalanceStart/0           1   0.01      1  [      1.00]
shell:apply_fun/3                   1   0.01      1  [      1.00]
erl_eval:exprs/5                    1   0.02      2  [      2.00]
erl_eval:eval_fun/6                 1   0.02      2  [      2.00]
erl_eval:expr_list/6                2   0.02      2  [      1.00]
erl_eval:add_bindings/2             1   0.02      2  [      2.00]
erl_eval:'-expr/5-fun-3-'/1         1   0.02      2  [      2.00]
pension2:totalBalance/1             1   0.02      2  [      2.00]
pension2:ca_balance/1               1   0.02      2  [      2.00]
pension2:caBalanceStart/0           1   0.02      2  [      2.00]
shell:'-eval_loop/3-fun-0-'/3       1   0.02      2  [      2.00]
erl_eval:ret_expr/3                 3   0.03      3  [      1.00]
erl_eval:guard/4                    1   0.03      3  [      3.00]
erl_eval:match_list/4               1   0.03      3  [      3.00]
erl_eval:new_bindings/0             1   0.03      3  [      3.00]
lists:reverse/1                     1   0.03      3  [      3.00]
erl_internal:bif/3                  1   0.03      3  [      3.00]
erl_eval:expr_list/4                1   0.04      4  [      4.00]
lists:foldl/3                       3   0.04      4  [      1.33]
erl_eval:merge_bindings/2           2   0.06      6  [      3.00]
orddict:to_list/1                   3   0.08      8  [      2.67]
erlang:apply/2                      1   0.09      9  [      9.00]
erl_eval:expr/5                     4   0.13     12  [      3.00]
pension2:ca_balance/2              21   0.29     28  [      1.33]
pension2:yieldRate/0               20   0.50     48  [      2.40]
pension2:tradingFees/1             20   0.51     49  [      2.45]
pension2:saBalance/1               21   0.56     53  [      2.52]
pension2:netIncome/0               60   0.75     71  [      1.18]
pension2:expensesStart/0           60   0.77     73  [      1.22]
pension2:investments/1             40   1.05    100  [      2.50]
pension2:transactionFee/0          60   1.47    140  [      2.33]
pension2:expenses/1               630  16.03   1528  [      2.43]
pension2:salary/1                 630  16.17   1541  [      2.45]
pension2:working/1                630  16.38   1561  [      2.48]
pension2:workingTimeUnits/0       630  16.40   1563  [      2.48]
pension2:inflationRate/0         1140  28.23   2690  [      2.36]
-----------------------------   -----  -----   ----  [----------]
Total:                           4000 100.00%  9530  [      2.38]

I never brought the macros into the benchmarks because my suggestion was supposed to be purely a readability thing, but given what we see above, I've
rewritten workingTimeUnits/0 and inflationRate/0 into macros. There definitely seemed to be an improvement:

58> timer:tc(pension2, totalBalance, [10]).
{24,30565.063527483566}
60> timer:tc(pension2, totalBalance, [60]).
{390,93052.91160889175}
61> timer:tc(pension2, totalBalance, [70]).
{444312,73294.80080230198}

And now replacing working/1 with T:

timer:tc(pension2, totalBalance, [70]).
{310477,73294.80080230198}
72> timer:tc(pension2, totalBalance, [60]).
{291,93052.91160889175}

That's a significant improvement from where we started. eprof now puts salary/1 and expenses/1 as using 45% and 48% of the total algorithm time at this point. So the question becomes: can these be written in a different way?

_______________________________________________
erlang-questions mailing list
erlang-questions@REDACTED
http://erlang.org/mailman/listinfo/erlang-questions

From llbgurs@REDACTED Tue Dec 15 12:25:21 2015
From: llbgurs@REDACTED (linbo liao)
Date: Tue, 15 Dec 2015 19:25:21 +0800
Subject: [erlang-questions] Erlang VM will open all socket in every thread?
In-Reply-To: References: Message-ID:

Thanks Lukas. So this is not the reason why proc_inode_cache consumes high memory?
Thanks,
Linbo

2015-12-15 18:06 GMT+08:00 Lukas Larsson :
> The fd table is another example of a resource that is shared across tasks, so
> all of the fds will be duplicated in procfs. The reason you are seeing
> different values for different tasks is most likely because your
> application is opening new sockets while you are running the commands.
>
> Lukas

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From lukas@REDACTED Tue Dec 15 13:48:42 2015
From: lukas@REDACTED (Lukas Larsson)
Date: Tue, 15 Dec 2015 13:48:42 +0100
Subject: [erlang-questions] Erlang VM will open all socket in every thread?
In-Reply-To: References: Message-ID:

On Tue, Dec 15, 2015 at 12:25 PM, linbo liao wrote:
> So this is not the reason why proc_inode_cache consumes high memory?

I don't know what could cause high proc_inode_cache memory usage. I'm pretty sure though that seeing the same socket in multiple tasks is the expected behavior.

Lukas

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From zxq9@REDACTED Tue Dec 15 13:51:03 2015
From: zxq9@REDACTED (zxq9)
Date: Tue, 15 Dec 2015 21:51:03 +0900
Subject: [erlang-questions] Feedback for my first non-trivial Erlang program
In-Reply-To: References: Message-ID: <3544057.cvUYeKDRV2@changa>

On 2015-12-15 11:12:24, Technion wrote:
>
> So there is definitely a measurable improvement there.

The more fundamental problem, though, is that this is a compounding function that steps singly as it increments, but it is written in a way that it counts down, not up, computing the *entire* value chain below it all over again each step, and within each sub-step once the input value is over 60.

It's an arithmetic fun-bomb!

If you remove most of the intermediate functions (I think most, but maybe profit/3 can be removed... ?) and compute the stepping values as arguments to an expanded definition of total_balance, this should run close to O(n).

Refactoring for that may also expose some bugs...

-Craig

From mononcqc@REDACTED Tue Dec 15 15:28:06 2015
From: mononcqc@REDACTED (Fred Hebert)
Date: Tue, 15 Dec 2015 09:28:06 -0500
Subject: [erlang-questions] Feedback for my first non-trivial Erlang program
In-Reply-To: <3544057.cvUYeKDRV2@changa>
References: <3544057.cvUYeKDRV2@changa>
Message-ID: <20151215142805.GA36571@fhebert-ltm1>

On 12/15, zxq9 wrote:
>The more fundamental problem, though, is that this is a compounding
>function that steps singly as it increments, but it is written in a way
>that it counts down, not up, computing the *entire* value chain below
>it all over again each step, and within each sub-step once the input
>value is over 60.

This is the true problem with this code. It's calling itself in so many ways that the whole thing explodes combinatorially and yields exponential increases in time. One quick way to work around that is through memoization with a macro such as:

%% Memoization uses a process dictionary. While hackish and not portable
%% across processes, a PD of this kind has the advantage of being
%% garbage collected and being returned in a stack trace. There is also no
%% copying (see: comments over the macros) or locking needed.
-define(memoize(E), lazy_memoize(fun() -> E end)).

lazy_memoize(F) when is_function(F) ->
    case erlang:get(F) of
        undefined ->
            erlang:put(F, F()),
            erlang:get(F);
        X -> X
    end.

Once that's done, wrap up the costly functions in ?memoize(Expression) and the value will be stored in the process dictionary. I've done it a bit haphazardly, memoizing stuff at a glance that looked recursive: https://gist.github.com/ferd/ab5fed3b8ffe4b226755

While the raw implementation stalled at ~80 iterations and became impossible to run, memoization makes it far more workable:

1> c(pension2).
{ok,pension2}
2> timer:tc(pension2, totalBalance, [8000]).
{354750,-5.145757524814171e19}
3> timer:tc(pension2, totalBalance, [8000]).
{11,-5.145757524814171e19}

Every follow-up call is then cheaper. Of course, the additional cost comes in memory:

4> erlang:process_info(self(), dictionary).
{dictionary,[{#Fun,0},
             {#Fun,-1325845925.5710535},
             {#Fun,0.0},
             {#Fun,16858.175941786},
             {#Fun,-4116.625402543886},
             {...}|...]}
5> length(element(2, erlang:process_info(self(), dictionary))).
103703
6> erlang_term:byte_size(erlang:process_info(self(), dictionary)).
66125376

(The last call here uses https://github.com/okeuday/erlang_term.) That's 66 MB of storage for these terms in the process dictionary at the very least (for 8000 iterations), and the host OS reports ~100 MB usage for the whole VM. Rewriting things to repeat fewer of these operations (building up from 0 to N, rather than N down to 0 with a lot of repetitions) would probably save a whole lot of memory. If refactoring is out of the question and memory is cheap to you, then memoization is a quick way to get something out of it.
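To see the trick in isolation, here is a tiny self-contained module (the module name and the fib example are mine; the macro is the one Fred shows above) that memoizes a naive Fibonacci:

```erlang
-module(memo_demo).
-export([fib/1]).

%% Same process-dictionary memoization as above. The key is the fun
%% itself: two closures made from the same fun expression with the
%% same captured N compare equal, so they share one cached entry.
-define(memoize(E), lazy_memoize(fun() -> E end)).

lazy_memoize(F) when is_function(F) ->
    case erlang:get(F) of
        undefined ->
            erlang:put(F, F()),
            erlang:get(F);
        X -> X
    end.

%% Naive exponential fib, made roughly linear by the cache.
fib(N) when N < 2 -> N;
fib(N) -> ?memoize(fib(N - 1) + fib(N - 2)).
```

Each distinct N is computed once per process, and, as noted above, the cache lives (and dies) with the calling process.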
Be mindful however that when memoizing in the process dictionary, every process that runs the operation will end up carrying that data. Moving it to ETS if the calls have to be distributed is an option, after a while it becomes very cheap. Flushing the cache from time to time (like a manual GC) is also a thing one may want to do. Ultimately refactoring, if possible, will prove the best long-term solution. Regards, Fred. From lloyd@REDACTED Tue Dec 15 15:55:17 2015 From: lloyd@REDACTED (Lloyd R. Prentice) Date: Tue, 15 Dec 2015 09:55:17 -0500 Subject: [erlang-questions] Mnesia best practices - a meta question Message-ID: <03ECD388-986F-4BFF-8D57-2891977696BE@writersglen.com> I've zeroed in on mnesia as the backend for the web application I'm developing for indie author/publishers. I've searched and read everything I can find on implementing and managing mnesia databases, including Klacke's fine manual. But I'm still left with many questions--- mostly arising from the fact that my users need to maintain and manage a large and diverse collection of data. I've identified more than fifty tables. I wish I had the knowledge and experience to write a mnesia best practices manual, but I don't. Thus, my meta question: Would it be more appropriate to bundle all my questions up in one post? Or to ask them in separate posts? Many thanks, LRP Sent from my iPad From aseigo@REDACTED Tue Dec 15 19:38:22 2015 From: aseigo@REDACTED (Aaron J. Seigo) Date: Tue, 15 Dec 2015 19:38:22 +0100 Subject: [erlang-questions] eimap: a little Erlang IMAP client Message-ID: <24172332.DPFkGknGCC@serenity> Hello, We began using Erlang where I work[1] for certain new components in our flagship product[2] which focuses on collaboration and groupware functionality. As such, IMAP is a protocol one sees flying about on a regular basis. So, after looking in vain for an IMAP client library written in Erlang, I have started to implement one. Say hello to eimap. 
It is implemented as a gen_fsm with each of the commands implemented in its own module, which makes extending / adding the myriad of silly IMAP commands that float about in the various RFCs[4] rather easy. It is designed to be used by one or more Erlang processes at a time, has a passthrough mode in addition to a command queue, tries to be very forgiving in usage (e.g. you can start queueing commands before it has connected to the server), ...

0.1 was a fairly rough-and-ready early release, and certainly quite limited in functionality ... but good enough to be used in the first release of a new Kolab component[3]. Release early and often, right? I've done some refactoring this week and added a number of new commands in preparation for releasing 0.2 this weekend. Should (one hopes! :) be an improvement over the first release.

Feedback, patches, questions, pointers, etc ... all quite welcome.

The project page on our Phabricator instance is here: https://git.kolab.org/project/profile/106/
git repo: https://git.kolab.org/diffusion/EI/
workboard: https://git.kolab.org/project/board/106/

Cheers!

[1] https://kolabsystems.com
[2] https://kolabenterprise.com via https://kolab.org
[3] https://exote.ch/blogs/aseigo/2015/12/10/guam-an-imap-session-filterproxy/
[4] http://www.imapwiki.org/ImapRFCList

--
Aaron Seigo

From mrallen1@REDACTED Tue Dec 15 21:30:08 2015
From: mrallen1@REDACTED (Mark Allen)
Date: Tue, 15 Dec 2015 20:30:08 +0000 (UTC)
Subject: [erlang-questions] lager changelog?
In-Reply-To: References: Message-ID: <1452939788.1838505.1450211408955.JavaMail.yahoo@mail.yahoo.com>

There is no official changelog, but as far as breaking API changes from 2.x to 3.x go, the big one is around how traces work (or don't) with multiple sinks. My expectation is that using lager 3 on a project that previously used 2.x should Just Work. If it doesn't, please open an issue on the repo - because that's definitely something I'd like to know about.

Thanks.
Mark

On Friday, December 11, 2015 4:10 PM, Technion wrote:

Hi,

Given that the Lager readme is quite thorough, you can get a pretty good answer on this by reviewing changes to the readme.

$ git clone https://github.com/basho/lager.git
$ cd lager
$ git diff 2.0.0rc2 3.0.2 -- README.md

The diff will include any API changes.

________________________________________
From: erlang-questions-bounces@REDACTED on behalf of Siraaj Khandkar
Sent: Saturday, 12 December 2015 7:15 AM
To: erlang-questions@REDACTED
Subject: [erlang-questions] lager changelog?

Is anyone aware of anything like a changelog for lager? I did not see anything obvious at https://github.com/basho/lager

More specifically, I inherited a project which uses 2.0.0rc1 and am wondering what surprises and API changes await me if I wanted to upgrade.

_______________________________________________
erlang-questions mailing list
erlang-questions@REDACTED
http://erlang.org/mailman/listinfo/erlang-questions

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From eric.pailleau@REDACTED Tue Dec 15 21:31:15 2015
From: eric.pailleau@REDACTED (PAILLEAU Eric)
Date: Tue, 15 Dec 2015 21:31:15 +0100
Subject: [erlang-questions] [ANN] geas 2.0.2
Message-ID: <56707893.1090803@wanadoo.fr>

Hi list,

Following my 25/11 alpha version announcement, I am hereby announcing the first stable release of geas.

Geas is now available as a plugin in your preferred build tool, erlang.mk or rebar. As simple as:

$> make geas

or

$> rebar geas

https://github.com/crownedgrouse/geas

Documentation is updated for plugin usage.

Enjoy!

Best regards.
From chandrashekhar.mullaparthi@REDACTED Tue Dec 15 23:52:46 2015 From: chandrashekhar.mullaparthi@REDACTED (Chandru) Date: Tue, 15 Dec 2015 22:52:46 +0000 Subject: [erlang-questions] Mnesia best practices - a meta question In-Reply-To: <03ECD388-986F-4BFF-8D57-2891977696BE@writersglen.com> References: <03ECD388-986F-4BFF-8D57-2891977696BE@writersglen.com> Message-ID: On 15 December 2015 at 14:55, Lloyd R. Prentice wrote: > I've zeroed in on mnesia as the backend for the web application I'm > developing for indie author/publishers. I've searched and read everything I > can find on implementing and managing mnesia databases, including Klacke's > fine manual. But I'm still left with many questions--- mostly arising from > the fact that my users need to maintain and manage a large and diverse > collection of data. I've identified more than fifty tables. > > I wish I had the knowledge and experience to write a mnesia best practices > manual, but I don't. Thus, my meta question: > > Would it be more appropriate to bundle all my questions up in one post? Or > to ask them in separate posts? > I would suggest starting with a few fundamental questions. The answers you get to those questions might influence your future questions! I have used mnesia for more use cases than most sane people recommend doing, so I'll do what I can to help. Chandru -------------- next part -------------- An HTML attachment was scrubbed... URL: From hugo@REDACTED Wed Dec 16 00:41:47 2015 From: hugo@REDACTED (Hugo Mills) Date: Tue, 15 Dec 2015 23:41:47 +0000 Subject: [erlang-questions] Deploying multiple webapps Message-ID: <20151215234147.GK26782@carfax.org.uk> I've got a collection of small services, with minimal coupling between the back ends of those services (orchestration is done mostly client-side). I'd like to put an HTTPS interface in front of each one -- say, with cowboy. 
What I'd also like to be able to do, at least in principle, is deploy some arbitrary subset of those services on each machine in my (comedically-named) server farm. I'd like to be able to do this with one TLS configuration, and preferably under a single port. i.e., access my services through

https://server.me/service1/...
https://server.me/service2/...
https://server.me/service3/...

Now, in python-land, which is largely where I come from, I'd set up Apache with mod-wsgi, and deploy each WSGI app to a specific URL within the same URL namespace. I'm not quite sure how to do that easily with erlang+cowboy, because there seems to be no easy way of treating a webapp as a unit within a larger server configuration. I keep coming to one of two approaches:

1) Write each service completely independently (as HTTP), run it on a distinct port, and splice together the URL namespaces through a reverse proxy on a "normal" web server like Apache.

2) Find some way to automatically write a top-level router for cowboy, for each set of services that I want to deploy to a machine.

I don't much like option 1, but I like option 2 even less. I guess I could write some kind of "top-level" app that, given a bunch of webapp modules (via a configuration file of some kind), gets a router for each module and transforms those routers into a single router config. Does such a thing already exist?

It all just feels a bit awkward, and I feel like I'm missing something. What do other people do to put together this kind of setup?

Hugo.

--
Hugo Mills              | I spent most of my money on drink, women and fast
hugo@REDACTED carfax.org.uk | cars. The rest I wasted.
http://carfax.org.uk/   |
PGP: E2AB1DE4           |                                         James Hunt

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 836 bytes Desc: Digital signature URL: From chandrashekhar.mullaparthi@REDACTED Wed Dec 16 00:57:56 2015 From: chandrashekhar.mullaparthi@REDACTED (Chandru) Date: Tue, 15 Dec 2015 23:57:56 +0000 Subject: [erlang-questions] Deploying multiple webapps In-Reply-To: <20151215234147.GK26782@carfax.org.uk> References: <20151215234147.GK26782@carfax.org.uk> Message-ID: Hi Hugo, Why does all this have to be done in Erlang? It sounds like your best bet is to use something like nginx/varnish/haproxy (or even Apache as you explained) to front your server farm. You can get that component to then rewrite the URLs and route requests to wherever your Erlang web services are located. I would do that rather than trying to do everything in Erlang. cheers, Chandru On 15 December 2015 at 23:41, Hugo Mills wrote: > I've got a collection of small services, with minimal coupling > between the back ends of those services (orchestration is done mostly > client-side). I'd like to put an HTTPS interface in front of each one > -- say, with cowboy. > > What I'd also like to be able to do, at least in principle, is > deploy some arbitrary subset of those services on each machine in my > (comedically-named) server farm. I'd like to be able to do this with > one TLS configuration, and preferably under a single port. > > i.e., access my services through > > https://server.me/service1/... > https://server.me/service2/... > https://server.me/service3/... > > Now, in python-land, which is largely where I come from, I'd set up > Apache with mod-wsgi, and deploy each WSGI app to a specific URL > within the same URL namespace. I'm not quite sure how to do that > easily with erlang+cowboy, because there seems to be no easy way of > treating a webapp as a unit within a larger server configuration. 
I > keep coming to one of two approaches: > > 1) Write each service completely independently (as HTTP), run it on a > distinct port, and splice together the URL namespaces through a > reverse proxy on a "normal" web server like Apache. > > 2) Find some way to automatically write a top-level router for cowboy, > for each set of services that I want to deploy to a machine. > > I don't much like option 1, but I like option 2 even less. I guess > I could write some kind of "top-level" app that, given a bunch of > webapp modules (via a configuration file of some kind), gets a router > for each module and transforms those routers into a single router > config. Does such a thing already exist? > > It all just feels a bit awkward, and I feel like I'm missing > something. What do other people do to put together this kind of setup? > > Hugo. > > -- > Hugo Mills | I spent most of my money on drink, women and fast > hugo@REDACTED carfax.org.uk | cars. The rest I wasted. > http://carfax.org.uk/ | > PGP: E2AB1DE4 | James > Hunt > > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From hugo@REDACTED Wed Dec 16 01:10:28 2015 From: hugo@REDACTED (Hugo Mills) Date: Wed, 16 Dec 2015 00:10:28 +0000 Subject: [erlang-questions] Deploying multiple webapps In-Reply-To: References: <20151215234147.GK26782@carfax.org.uk> Message-ID: <20151216001028.GL26782@carfax.org.uk> Hi, Chandru, On Tue, Dec 15, 2015 at 11:57:56PM +0000, Chandru wrote: > Why does all this have to be done in Erlang? > > It sounds like your best bet is to use something like nginx/varnish/haproxy > (or even Apache as you explained) to front your server farm. You can get > that component to then rewrite the URLs and route requests to wherever your > Erlang web services are located. 
I would do that rather than trying to do > everything in Erlang. Thanks for the advice. I guess I'm unhappy (probably with no good reason) with the idea of running each service in a separate erlang VM, and each one running on a separate port, and having to ensure that those ports aren't visible outside the machine (because they'll be running HTTP, not the desired HTTPS). Those are probably all relatively minor considerations in the grand scheme of things, though. I shall sleep on it and see if I can face my fears, and see what other advice people have to offer. Hugo. > cheers, > Chandru > > > On 15 December 2015 at 23:41, Hugo Mills wrote: > > > I've got a collection of small services, with minimal coupling > > between the back ends of those services (orchestration is done mostly > > client-side). I'd like to put an HTTPS interface in front of each one > > -- say, with cowboy. > > > > What I'd also like to be able to do, at least in principle, is > > deploy some arbitrary subset of those services on each machine in my > > (comedically-named) server farm. I'd like to be able to do this with > > one TLS configuration, and preferably under a single port. > > > > i.e., access my services through > > > > https://server.me/service1/... > > https://server.me/service2/... > > https://server.me/service3/... > > > > Now, in python-land, which is largely where I come from, I'd set up > > Apache with mod-wsgi, and deploy each WSGI app to a specific URL > > within the same URL namespace. I'm not quite sure how to do that > > easily with erlang+cowboy, because there seems to be no easy way of > > treating a webapp as a unit within a larger server configuration. I > > keep coming to one of two approaches: > > > > 1) Write each service completely independently (as HTTP), run it on a > > distinct port, and splice together the URL namespaces through a > > reverse proxy on a "normal" web server like Apache. 
> > > > 2) Find some way to automatically write a top-level router for cowboy, > > for each set of services that I want to deploy to a machine. > > > > I don't much like option 1, but I like option 2 even less. I guess > > I could write some kind of "top-level" app that, given a bunch of > > webapp modules (via a configuration file of some kind), gets a router > > for each module and transforms those routers into a single router > > config. Does such a thing already exist? > > > > It all just feels a bit awkward, and I feel like I'm missing > > something. What do other people do to put together this kind of setup? > > > > Hugo. > > -- Hugo Mills | Anyone who claims their cryptographic protocol is hugo@REDACTED carfax.org.uk | secure is either a genius or a fool. Given the http://carfax.org.uk/ | genius/fool ratio for our species, the odds aren't PGP: E2AB1DE4 | good. Bruce Schneier -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 836 bytes Desc: Digital signature URL: From chandrashekhar.mullaparthi@REDACTED Wed Dec 16 01:17:50 2015 From: chandrashekhar.mullaparthi@REDACTED (Chandru) Date: Wed, 16 Dec 2015 00:17:50 +0000 Subject: [erlang-questions] Deploying multiple webapps In-Reply-To: <20151216001028.GL26782@carfax.org.uk> References: <20151215234147.GK26782@carfax.org.uk> <20151216001028.GL26782@carfax.org.uk> Message-ID: On 16 December 2015 at 00:10, Hugo Mills wrote: > Hi, Chandru, > > On Tue, Dec 15, 2015 at 11:57:56PM +0000, Chandru wrote: > > Why does all this have to be done in Erlang? > > > > It sounds like your best bet is to use something like > nginx/varnish/haproxy > > (or even Apache as you explained) to front your server farm. You can get > > that component to then rewrite the URLs and route requests to wherever > your > > Erlang web services are located. I would do that rather than trying to do > > everything in Erlang. > > Thanks for the advice. 
> > I guess I'm unhappy (probably with no good reason) with the idea of > running each service in a separate erlang VM, and each one running on > a separate port, and having to ensure that those ports aren't visible > outside the machine (because they'll be running HTTP, not the desired > HTTPS). > Sorry, I don't think I explained clearly. My point was that you can run multiple services in a single VM, on a single port. When your HTTPS request hits your front-end (nginx/varnish/apache/whatever), you get it to do two things. - Handle all the TLS stuff - Rewrite the URL in the request. (If the request from the client is http://server.me/service1, rewrite it to http://internal.server1.me/service1, http://server.me/service2 becomes http://internal.server1.me/service2) - You can have a global cowboy handler (one module which is used in all your backend erlang nodes) which provides internal routing for all your services. At its most basic form, routing in cowboy is redirecting requests based on URL to a module. So you just have to make sure this module is common across all your erlang nodes, regardless of how you distribute your services across nodes. What am I missing here? Chandru > > > > On 15 December 2015 at 23:41, Hugo Mills wrote: > > > > > I've got a collection of small services, with minimal coupling > > > between the back ends of those services (orchestration is done mostly > > > client-side). I'd like to put an HTTPS interface in front of each one > > > -- say, with cowboy. > > > > > > What I'd also like to be able to do, at least in principle, is > > > deploy some arbitrary subset of those services on each machine in my > > > (comedically-named) server farm. I'd like to be able to do this with > > > one TLS configuration, and preferably under a single port. > > > > > > i.e., access my services through > > > > > > https://server.me/service1/... > > > https://server.me/service2/... > > > https://server.me/service3/... 
> > > > > > Now, in python-land, which is largely where I come from, I'd set up > > > Apache with mod-wsgi, and deploy each WSGI app to a specific URL > > > within the same URL namespace. I'm not quite sure how to do that > > > easily with erlang+cowboy, because there seems to be no easy way of > > > treating a webapp as a unit within a larger server configuration. I > > > keep coming to one of two approaches: > > > > > > 1) Write each service completely independently (as HTTP), run it on a > > > distinct port, and splice together the URL namespaces through a > > > reverse proxy on a "normal" web server like Apache. > > > > > > 2) Find some way to automatically write a top-level router for cowboy, > > > for each set of services that I want to deploy to a machine. > > > > > > I don't much like option 1, but I like option 2 even less. I guess > > > I could write some kind of "top-level" app that, given a bunch of > > > webapp modules (via a configuration file of some kind), gets a router > > > for each module and transforms those routers into a single router > > > config. Does such a thing already exist? > > > > > > It all just feels a bit awkward, and I feel like I'm missing > > > something. What do other people do to put together this kind of setup? > > > > > > Hugo. > > > > > -- > Hugo Mills | Anyone who claims their cryptographic protocol is > hugo@REDACTED carfax.org.uk | secure is either a genius or a fool. Given the > http://carfax.org.uk/ | genius/fool ratio for our species, the odds > aren't > PGP: E2AB1DE4 | good. Bruce > Schneier > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From essen@REDACTED Wed Dec 16 01:21:09 2015 From: essen@REDACTED (=?UTF-8?Q?Lo=c3=afc_Hoguin?=) Date: Wed, 16 Dec 2015 01:21:09 +0100 Subject: [erlang-questions] Deploying multiple webapps In-Reply-To: <20151215234147.GK26782@carfax.org.uk> References: <20151215234147.GK26782@carfax.org.uk> Message-ID: <5670AE75.8000300@ninenines.eu> Hello, I have been working on RabbitMQ these past few months, and one of the tasks I got assigned was to switch the Web components from Webmachine to Cowboy. RabbitMQ already had a way to have different services on different URLs running under the same port, as you describe. So my work was in part to make it work with Cowboy. I didn't have to change much. I made a tiny middleware that queries the RabbitMQ component keeping track of all the services, and then returns with the 'dispatch' environment variable added. This middleware runs just before the cowboy_router middleware: https://github.com/rabbitmq/rabbitmq-web-dispatch/blob/rabbitmq-management-63/src/rabbit_cowboy_middleware.erl#L27 The process keeping track of all services simply has a mapping from /service1/ to the service's dispatch list (/service1/ is added dynamically). This works pretty well, is all on one node, one port, just like you need, and doesn't require much code. I suppose it wouldn't be too difficult to extract and make it its own project, if that's something people need. Note that everything I talk about here has not been merged yet; but I'm getting close to completion (all tests pass, yada yada). Cheers, On 12/16/2015 12:41 AM, Hugo Mills wrote: > I've got a collection of small services, with minimal coupling > between the back ends of those services (orchestration is done mostly > client-side). I'd like to put an HTTPS interface in front of each one > -- say, with cowboy. > > What I'd also like to be able to do, at least in principle, is > deploy some arbitrary subset of those services on each machine in my > (comedically-named) server farm.
I'd like to be able to do this with > one TLS configuration, and preferably under a single port. > > i.e., access my services through > > https://server.me/service1/... > https://server.me/service2/... > https://server.me/service3/... > > Now, in python-land, which is largely where I come from, I'd set up > Apache with mod-wsgi, and deploy each WSGI app to a specific URL > within the same URL namespace. I'm not quite sure how to do that > easily with erlang+cowboy, because there seems to be no easy way of > treating a webapp as a unit within a larger server configuration. I > keep coming to one of two approaches: > > 1) Write each service completely independently (as HTTP), run it on a > distinct port, and splice together the URL namespaces through a > reverse proxy on a "normal" web server like Apache. > > 2) Find some way to automatically write a top-level router for cowboy, > for each set of services that I want to deploy to a machine. > > I don't much like option 1, but I like option 2 even less. I guess > I could write some kind of "top-level" app that, given a bunch of > webapp modules (via a configuration file of some kind), gets a router > for each module and transforms those routers into a single router > config. Does such a thing already exist? > > It all just feels a bit awkward, and I feel like I'm missing > something. What do other people do to put together this kind of setup? > > Hugo. > > > > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions > -- Loïc Hoguin http://ninenines.eu Author of The Erlanger Playbook, A book about software development using Erlang From llbgurs@REDACTED Wed Dec 16 04:27:29 2015 From: llbgurs@REDACTED (linbo liao) Date: Wed, 16 Dec 2015 11:27:29 +0800 Subject: [erlang-questions] Erlang VM will open all socket in every thread? In-Reply-To: References: Message-ID: Thanks Lukas.
Maybe too many threads, and processes open and close sockets frequently. We will set some kernel parameters to free them. Thanks, Linbo 2015-12-15 20:48 GMT+08:00 Lukas Larsson : > On Tue, Dec 15, 2015 at 12:25 PM, linbo liao wrote: > >> So this is not the reason why proc_inode_cache consume high memory ? >> > > I don't know what could cause high proc_inode_cache memory usage. I'm > pretty sure though that seeing the same socket in multiple tasks is the > expected behavior. > > Lukas > -------------- next part -------------- An HTML attachment was scrubbed... URL: From zxq9@REDACTED Wed Dec 16 05:14:55 2015 From: zxq9@REDACTED (zxq9) Date: Wed, 16 Dec 2015 13:14:55 +0900 Subject: [erlang-questions] Feedback for my first non-trivial Erlang program In-Reply-To: <20151215142805.GA36571@fhebert-ltm1> References: <3544057.cvUYeKDRV2@changa> <20151215142805.GA36571@fhebert-ltm1> Message-ID: <17911212.joFRQBhcio@changa> On 2015-12-15 (Tuesday) 09:28:06 you wrote: > On 12/15, zxq9 wrote: > >The more fundamental problem, though, is that this is a compounding > >function that steps singly as it increments, but it is written in a way > >that it counts down, not up, computing the *entire* value chain below > >it all over again each step, and within each sub-step once the input > >value is over 60. > > > >It's an arithmetic fun-bomb! > > > >If you remove most of the intermediate functions (I think most but > >maybe profit/3 can be removed... ?) and compute the stepping values as > >arguments to an expanded definition of total_balance this should run > >close to O(n). > > > >Refactoring for that may also expose some bugs... > > This is the true problem with this code. It's calling itself in so many > ways the whole thing explodes combinatorially and yields exponential > increases in time. > > One quick way to work around that is through memoization with a macro > such as: > > %% memoization uses a process dictionary.
While hackish and not portable > %% through processes, a PD of this kind has the advantage of being > %% garbage collected and being returned in a stack trace. There is also no > %% copying (see: comments over the macros) or locking needed. > -define(memoize(E), lazy_memoize(fun()-> E end)). > lazy_memoize(F) when is_function(F) -> > case erlang:get(F) of > undefined -> > erlang:put(F,F()), > erlang:get(F); > X -> X > end. > > Once that's done, wrap up the costly functions in ?memoize(Expression) > and the value will be stored in a process dictionary. > > I've done it a bit haphazardly, memoizing stuff at a glance that looked > recursive: https://gist.github.com/ferd/ab5fed3b8ffe4b226755 > > While the raw implementation stalled at ~80 iterations and became > impossible to run, memoization makes it far more workable: > > 1> c(pension2). > {ok,pension2} > 2> timer:tc(pension2, totalBalance, [8000]). > {354750,-5.145757524814171e19} > 3> timer:tc(pension2, totalBalance, [8000]). > {11,-5.145757524814171e19} > > Every follow-up call is then cheaper. Of course, the additional cost > comes in memory: > > 4> erlang:process_info(self(), dictionary). > {dictionary,[{#Fun,0}, > {#Fun,-1325845925.5710535}, > {#Fun,0.0}, > {#Fun,16858.175941786}, > {#Fun,-4116.625402543886}, > {...}|...]} > 5> length(element(2,erlang:process_info(self(), dictionary))). > 103703 > 6> erlang_term:byte_size(erlang:process_info(self(), dictionary)). > 66125376 > > (the last call here uses https://github.com/okeuday/erlang_term). That's > 66mb of storage for these terms in the process dictionary at the very > least (for 8000 iterations) and the host OS reports ~100mb usage for the > whole VM. > > Rewriting things to repeat fewer of these operations (building up from 0 > to N, rather than N down to 0 with a lot of repetitions) would probably > save memory a whole lot. If refactoring is out of the question and > memory is cheap to you, then memoization is a quick way to get something > out of it. 
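[Interjecting here: the process dictionary cache above is per-process. If several processes need to share the memoized results, the same trick can be done with a shared ETS table. A rough sketch, not tested against the pension code -- the module name, table name, and key shape below are all my own invention:]

```erlang
%% Sketch of a shared-cache variant of lazy memoization: results go into
%% a named, public ETS table so every process sees the same cache.
%% (memo_cache / memo_ets are invented names, not from the original code.)
-module(memo_ets).
-export([init/0, memoize/2]).

%% Create the cache table once, e.g. from your supervisor or app start.
init() ->
    ets:new(memo_cache, [named_table, public, {read_concurrency, true}]).

%% Key identifies the computation, e.g. {total_balance, T}.
%% F is a zero-arity fun producing the value; it runs only on a cache miss.
memoize(Key, F) when is_function(F, 0) ->
    case ets:lookup(memo_cache, Key) of
        [{_, V}] ->
            V;
        [] ->
            V = F(),
            true = ets:insert(memo_cache, {Key, V}),
            V
    end.
```

Call sites would then look like memo_ets:memoize({total_balance, T}, fun() -> totalBalance(T) end), and flushing the whole cache is just ets:delete_all_objects(memo_cache).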
> > Be mindful however that when memoizing in the process dictionary, every > process that runs the operation will end up carrying that data. Moving > it to ETS if the calls have to be distributed is an option, after a > while it becomes very cheap. Flushing the cache from time to time (like > a manual GC) is also a thing one may want to do. > > Ultimately refactoring, if possible, will prove the best long-term > solution. To beat a dead horse... I messed around with this a little to demonstrate to the OP the difference between stepping and constantly re-computing. Testing the original functions is not very easy because they aren't tail recursive (so I can't just jump into the middle of a single iteration and check values), but taking a look at values as they occur, the remove/3 function appears to never be called, and tax/2 returns 0.0 in every actual case except when T is 60 (where the case is always fixed: `tax(2214.5751488127876, 60) -> 306.95219504336956`): tax(X,T) -> Portfolio = portfolio(T), Tax = profit(X,T,Portfolio)*taxRate(), io:format("tax(~tp, ~tp) -> ~tp~n", [X, T, Tax]), Tax. %... profit(X,T,[{Amount,Time}|Tail]) -> Profit = case currentValue(Amount,T-Time) > X of true -> X - originalValue(X,T-Time); false -> X - (Amount + profit((X - currentValue(Amount,T-Time)),T,Tail)) end, ok = io:format("profit(~tp, ~tp, [{~tp, ~tp} | _]) -> ~tp~n", [X, T, Amount, Time, Profit]), Profit. Gives: 1> pension2:totalBalance(63). [-- Snipping a *lot* of iterations out of this... 
it goes wild --] tax(2214.5751488127876, 60) -> 306.95219504336956 profit(831.592400470033, 60, [{993.4028899999998, 2} | _]) -> 171.8792078975349 profit(2086.6665047965516, 60, [{991.6999999999998, 1} | _]) -> 923.087296899017 profit(2214.5751488127876, 60, [{127.90864401623617, 60} | _]) -> 1163.5792078975344 tax(2214.5751488127876, 60) -> 306.95219504336956 profit(2218.3399265657695, 61, [{2649.3502215622093, 61} | _]) -> 0.0 tax(2218.3399265657695, 61) -> 0.0 profit(831.592400470033, 60, [{993.4028899999998, 2} | _]) -> 171.8792078975349 profit(2086.6665047965516, 60, [{991.6999999999998, 1} | _]) -> 923.087296899017 profit(2214.5751488127876, 60, [{127.90864401623617, 60} | _]) -> 1163.5792078975344 tax(2214.5751488127876, 60) -> 306.95219504336956 profit(2218.3399265657695, 61, [{2649.3502215622093, 61} | _]) -> 0.0 tax(2218.3399265657695, 61) -> 0.0 profit(831.592400470033, 60, [{993.4028899999998, 2} | _]) -> 171.8792078975349 profit(2086.6665047965516, 60, [{991.6999999999998, 1} | _]) -> 923.087296899017 profit(2214.5751488127876, 60, [{127.90864401623617, 60} | _]) -> 1163.5792078975344 tax(2214.5751488127876, 60) -> 306.95219504336956 profit(2222.1111044409313, 62, [{4868.812299814968, 62} | _]) -> 0.0 tax(2222.1111044409313, 62) -> 0.0 87032.35394558453 I'm *pretty* sure this is not what was intended. It makes the totalBalance/1 function create a curve that very quickly represents a negative pension -- which I also don't think was the intent. So that is almost certainly a logical flaw in the program. If it is not, then these fixed values should be precalculated as constants, because they never change (only the final element of the "portfolio" list does). But looking at the efficiency angle once again. 
This is a compound *stepping* function that can iterate UP to T = X where X is the argument to total_balance/1, taking advantage of all the computation that has gone before, instead of trying to compute downward toward T = 0 and having to create the entire identical forest of computation trees that underlie each descending value. Here is a version that has a VERY SIMILAR LOGICAL FLAW to the original program, but presents a slightly different diverging value over values of X > 60 (because I have no idea what the actual intent of the original program was -- in this case because the values count up instead of down the head of the portfolio list is the highest value, so instead of constantly replacing the lowest value {990, 0} the highest value continues to climb). This version is written to step up, and is basically just one big function: -module(pension3). -export([total_balance/1]). %%% CONSTANTS -define(working_time_units, 60). transaction_fee() -> 10. ca_balance_start() -> 10000. sa_balance_start() -> 10000. net_income() -> 3000. expenses_start() -> 2000. inflation_rate() -> 1.0017. yield_rate() -> 1.004. tax_rate() -> 0.2638. %% total of what you own: total_balance(T) when T < 0 -> 0; total_balance(0) -> ca_balance_start() + sa_balance_start(); total_balance(T) -> Current = 0, CA = ca_balance_start(), SA = sa_balance_start(), Salary = net_income(), Expenses = expenses_start(), Portfolio = [], total_balance(T, Current, CA, SA, Salary, Expenses, Portfolio). 
total_balance(T, T, CA, SA, _, _, _) -> CA + SA; total_balance(T, Current, CA, SA, Salary, Expenses, Portfolio) when Current < ?working_time_units -> Inflation = inflation_rate(), TransactionFee = transaction_fee(), Investments = Salary - Expenses - TransactionFee, Fees = Investments + TransactionFee, NewCurrent = Current + 1, NewCA = CA + Salary - Expenses - Fees, NewSA = SA * yield_rate() + Investments, NewSalary = Salary * Inflation, NewExpenses = Expenses * Inflation, NewPortfolio = [{Investments, Current} | Portfolio], total_balance(T, NewCurrent, NewCA, NewSA, NewSalary, NewExpenses, NewPortfolio); total_balance(T, Current, CA, SA, Salary, Expenses, [{Amount, Time} | Rest]) -> Inflation = inflation_rate(), TransactionFee = transaction_fee(), Portfolio = [{Amount - original_value(Expenses, Current - Time), Current} | Rest], Tax = profit(Expenses, Current, Portfolio) * tax_rate(), Investments = -Expenses - Tax - TransactionFee, Fees = Investments + Tax + TransactionFee, NewCurrent = Current + 1, NewCA = CA + Salary - Expenses - Fees, NewSA = SA * yield_rate() + Investments, NewSalary = 0, NewExpenses = Expenses * Inflation, total_balance(T, NewCurrent, NewCA, NewSA, NewSalary, NewExpenses, Portfolio). profit(X, T, S) -> profit(X, T, S, 0). profit(X, _, [], A) -> X - A; profit(X, T, [{Amount, Time} | Tail], A) -> TLessTime = T - Time, case current_value(Amount, TLessTime) > X of true -> X - original_value(X, TLessTime); false -> profit(X, T, Tail, A + Amount) end. %% helper functions for calculation the current and original value of a position current_value(X,TD) -> X * math:pow(yield_rate(), TD). original_value(X,TD) -> X / math:pow(yield_rate(), TD). This can *certainly* be compressed in terms of lines of text, but when a function has 7 arguments I like the names of arguments and their origins to be painfully obvious. 
Like reading that function is slightly painful, and it should be in this case, because I want all that variable assignment to stick out and make the next call absolutely clear. The variable assignments in there were really documentation for myself as I wrote the different clauses. I can easily imagine people finger-pointing, saying "Oh, look at all those variable assignments! That's sooooo wasteful and will hurt performance" and *completely* overlook the nature of the function that is actually written compared to the original (which at first glance looks so nice with all those tiny functions, well, too many case statements for my taste, but whatever). Obviously the code above can be reduced in ways other than inline argument value construction, and surely the helper functions that existed originally are not all without value -- but they should have been called from within a single, main iteration like this one that relied on the previously computed values instead of generating their own from scratch every time they were invoked. But since I have no idea what they were really supposed to do and they didn't work anyway, I'm left with this version. So let's compare these with timer:tc/3: 1> timer:tc(pension2, totalBalance, [1]). {13,21030.0} 2> timer:tc(pension3, total_balance, [1]). {4,21030.0} 3> timer:tc(pension2, totalBalance, [25]). {221,47557.60140838164} 4> timer:tc(pension3, total_balance, [25]). {18,47557.60140838164} 5> timer:tc(pension2, totalBalance, [59]). {1018,91630.97931148569} 6> timer:tc(pension3, total_balance, [59]). {47,91630.97931148569} 7> timer:tc(pension2, totalBalance, [70]). {785665,73294.80080230196} 8> timer:tc(pension3, total_balance, [70]). {328,206288.05299963654} 9> timer:tc(pension2, totalBalance, [74]). {12821710,65186.052900269984} 10> timer:tc(pension3, total_balance, [74]). {430,234679.70050062318} 11> timer:tc(pension3, total_balance, [100]).
{1107,184895.28643276813} 12> timer:tc(pension3, total_balance, [100000]). {138851,-1.7311541595393215e180} pension2:totalBalance/1 stalls out at 75 for me. I'm sure it can complete, but I didn't wait for it. It obviously explodes at that size. pension3:total_balance/1 blows up at 1000000, apparently because the float value is exhausted: 13> timer:tc(pension3, total_balance, [1000000]). ** exception error: an error occurred when evaluating an arithmetic expression in function pension3:total_balance/7 (pension3.erl, line 54) in call from timer:tc/3 (timer.erl, line 197) Computing 100,000 months of pension time by incrementing values is 92 times faster than computing 74 months by decrementing. That ratio only gets worse as the values increase, as pension2 literally explodes in complexity when X > 60. The overall fact remains, though, that each input always maps to a fixed output (the function is deterministic), and it appears this is dealing with months of real time. I doubt any pension calculation is going to remain accurate more than 5 years (or more than 5 months, given current events...), and making pension projections is almost *certainly* unnecessary beyond, say, 1200 months. With that in mind it makes sense to run this computation *once* and store the result in a chart. Then it doesn't matter if it takes 5 days to compute the result, the answers are in a K-V table and instantly available for lookup. Also -- I wrote above that the helper functions "didn't work". They MAY HAVE BEEN ACCURATE. Occasionally in finance and some other arithmetic-heavy domains you find yourself needing a ton of little functions that calculate a particular value, but you don't initially have time to analyze the nature of those functions. Sometimes even a cursory analysis will lead you to recognize that the nature of those functions lends itself to shortcuts (actually, functional shortcuts pop up a lot in game servers as well).
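To sketch what "compute once and store it in a chart" might look like -- the module and table names here are invented, the 1200-month cap is only illustrative, and this assumes the pension3 module above:

```erlang
%% Sketch: fill an ETS table with every total_balance/1 result up to a
%% cap, once, then answer all later queries by constant-time lookup.
%% (pension_table / pension_chart are invented names.)
-module(pension_table).
-export([build/1, lookup/1]).

%% Run once, e.g. build(1200) for a 100-year horizon in months.
build(MaxMonths) when is_integer(MaxMonths), MaxMonths >= 0 ->
    _ = ets:new(pension_chart, [named_table, public, {read_concurrency, true}]),
    lists:foreach(fun(T) ->
                      true = ets:insert(pension_chart, {T, pension3:total_balance(T)})
                  end,
                  lists:seq(0, MaxMonths)),
    ok.

%% Every query after build/1 is a single table read.
lookup(T) when is_integer(T), T >= 0 ->
    case ets:lookup(pension_chart, T) of
        [{_, Balance}] -> {ok, Balance};
        []             -> not_computed
    end.
```

Even if total_balance/1 were slow, it runs exactly once per month value; everything after that is an ETS read.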
If the functions above are actually accurate most of them can be collapsed to simple cases or single replacements, instead of complete sequential searches to test for things you already know don't apply. -Craig From aseigo@REDACTED Tue Dec 15 22:57:42 2015 From: aseigo@REDACTED (Aaron J. Seigo) Date: Tue, 15 Dec 2015 22:57:42 +0100 Subject: [erlang-questions] Feedback for my first non-trivial Erlang program In-Reply-To: References: Message-ID: <50009725.El7OhDzJc8@serenity> On Tuesday, December 15, 2015 01.45:26 Paul Wolf wrote: > caBalance(T) -> caBalance(T-1) + salary(T-1) - expenses(T-1) - > tradingFees(T-1). https://en.wikipedia.org/wiki/Tail_call this is going to create a stack of T calls to caBalance (let alone all the others) as it is not tail recursive. by using an accumulator to build our result, the VM will be able to clear the call stack each time: caBalance(T) when is_integer(T) -> caBalance(T, 0). caBalance(0, Total) -> Total; caBalance(T, Total) -> caBalance(T - 1, Total + salary(T-1) - expenses(T-1) - tradingFees(T-1)). or you could use a sequence and a fold: caBalance(T) when is_integer(T) -> lists:foldl(fun(Month, Acc) -> salary(Month) - expenses(Month) - tradingFees(Month) + Acc end, 0, lists:seq(0, T-1)). there are a number of functions in your code that suffer from this, including profit/3, expenses/1. on another note, as I've gotten more comfortable w/Erlang, I've found myself using case less and less and doing things like this: trading_fees(T) -> trading_fees_when_working(T, working(T)). trading_fees_when_working(T, true) -> investments(T) + transactionFee(); trading_fees_when_working(T, false) -> trading_fees_when_working(T, true) + tax(expenses(T),T). I find it easier to read such code as it expands and gets more complex, and in this case it makes it more evident that when working(T) == false, it is actually the same as when working but with taxes. readability ftw.
but maybe that's just me :) -- Aaron Seigo -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 198 bytes Desc: This is a digitally signed message part. URL: From mrtndimitrov@REDACTED Wed Dec 16 08:48:21 2015 From: mrtndimitrov@REDACTED (Martin Koroudjiev) Date: Wed, 16 Dec 2015 09:48:21 +0200 Subject: [erlang-questions] The preprocessor and the include files Message-ID: <56711745.7010302@gmail.com> Hello, During compile time, we need to have the *.hrl files available but do they need to be distributed along with the beam files? Thanks for your time! Best regards, Martin From samuelrivas@REDACTED Wed Dec 16 09:10:35 2015 From: samuelrivas@REDACTED (Samuel) Date: Wed, 16 Dec 2015 09:10:35 +0100 Subject: [erlang-questions] The preprocessor and the include files In-Reply-To: <56711745.7010302@gmail.com> References: <56711745.7010302@gmail.com> Message-ID: You can try that out easily, but the answer is "no". The hrl files are only needed for the preprocessor; once the code is compiled it doesn't depend on them any more. On 16 December 2015 at 08:48, Martin Koroudjiev wrote: > Hello, > > During compile time, we need to have the *.hrl files available but do > they need to be distributed along with the beam files? > > Thanks for your time! > Best regards, > Martin > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions -- Samuel From borja.carbo@REDACTED Wed Dec 16 09:59:23 2015 From: borja.carbo@REDACTED (borja.carbo@REDACTED) Date: Wed, 16 Dec 2015 08:59:23 +0000 Subject: [erlang-questions] --xinclude in xsltproc Message-ID: <20151216085923.Horde.kO8e0Le4ufo0iFwhhL6snA3@whm.todored.info> Hello According to the examples in the erl_docgen documentation, it seems the parameter --xinclude should be empty for the different output formats to be generated.
However it is not the same on the code in the github document https://github.com/erlang/otp/blob/maint/make/otp_release_targets.mk where for example for the indext.html case it is used: --xinclude $(TOP_SPECS_PARAM) $(MOD2APP_PARAM) \ and for .fo it is used: --xinclude $(TOP_SPECS_PARAM) \ It is true that at the begining of the document it is indicated: ifneq ($(TOP_SPECS_FILE),) TOP_SPECS_PARAM = --stringparam specs_file "`pwd`/$(TOP_SPECS_FILE)" endif MOD2APP = $(ERL_TOP)/make/$(TARGET)/mod2app.xml ifneq ($(wildcard $(MOD2APP)),) MOD2APP_PARAM = --stringparam mod2app_file "$(MOD2APP)" endif However this can not be generalized to any user code documentation since those sentences refere to some specific directoty (by the `pwd`) and although I have search for the mod2app.xml file I have not been able to find it inside the github tree. Please any help? I have tried to just list the *.xml files in my case needed (map.xml and others), all of them placed on the same directory and entering the xsltproc command from that path and the result was that: compilation error: file map.xml line 3 element erlref xsltParseStylesheetProcess : document is not a stylesheet However, when applying the xsltproc command to the that file (map.xml) there was not problem to find the stylesheet. Best Regards / Borja From mrtndimitrov@REDACTED Wed Dec 16 09:50:51 2015 From: mrtndimitrov@REDACTED (Martin Koroudjiev) Date: Wed, 16 Dec 2015 10:50:51 +0200 Subject: [erlang-questions] The preprocessor and the include files In-Reply-To: References: <56711745.7010302@gmail.com> Message-ID: <567125EB.6060303@gmail.com> Thank you! On 12/16/2015 10:10 AM, Samuel wrote: > You can try that out easily, but the answer is "no". 
The hrl files are > only needed for the preprocessor, once the code is compiled it doesn't > depend on them any more > > On 16 December 2015 at 08:48, Martin Koroudjiev wrote: >> Hello, >> >> During compile time, we need to have the *.hrl files available but do >> they need to be distributed along with the beam files? >> >> Thanks for your time! >> Best regards, >> Martin >> _______________________________________________ >> erlang-questions mailing list >> erlang-questions@REDACTED >> http://erlang.org/mailman/listinfo/erlang-questions > > From s.j.thompson@REDACTED Wed Dec 16 10:21:36 2015 From: s.j.thompson@REDACTED (Simon Thompson) Date: Wed, 16 Dec 2015 09:21:36 +0000 Subject: [erlang-questions] Wrangler1.2: Added symbolic evaluation and behaviour refactorings References: Message-ID: <1A1A49AB-C0F6-4CB5-842E-D0BCADF7BE1F@kent.ac.uk> The Wrangler team is very happy to announce a new release of the system which pulls together a number of changes and bug-fixes. We've added functionality for extracting behaviours from existing code, for some aspects of symbolic evaluation and some more slicing. There are also quite a few bug fixes. Further details below. Added "Symbolic evaluation" refactorings Added "Behaviour refactorings" Added "Backward intra-function slice" inspector Added support for $(DESTDIR) in Makefile Fixed typos, rewrote parts, added typespecs, and cleaned up some parts of the code Fixed problems: With prettyprinter when patterns are empty but there is a guard In "Add a WS operation" transformation In "Remove a WS operation argument" transformation In "Swap Function Arguments" refactoring In removal of functions from custom refactorings In error message of API migration In "Rename module" command output With atoms that span multiple lines With callback attribute With maps Added check for lists in generalised unification Restored function api_refac:reset_pos_and_range/1 Removed dependency on r17 eep View it on GitHub .
Pablo, Huiqing and Simon Simon Thompson | Professor of Logic and Computation School of Computing | University of Kent | Canterbury, CT2 7NF, UK s.j.thompson@REDACTED | M +44 7986 085754 | W www.cs.kent.ac.uk/~sjt -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: AAmtcH-WBh-12s4tSdWkUbsBEvwzQNB1ks5pPt5xgaJpZM4G03gP.gif Type: image/gif Size: 35 bytes Desc: not available URL: From andre@REDACTED Wed Dec 16 12:47:11 2015 From: andre@REDACTED (=?utf-8?Q?Andr=C3=A9_Cruz?=) Date: Wed, 16 Dec 2015 11:47:11 +0000 Subject: [erlang-questions] Strange interaction between Docker and Erlangs ports (exit_status lost) Message-ID: Hello. I've run into a strange problem when using Erlang and Docker. I have a small PoC that uses open_port to launch an "ls" command and, under normal circumstances, I get the exit_status message and the program terminates. However, if I run the command directly via "docker run", it seems the message is lost. The "ls" command is executed and terminates, but I don't get the "exit_status" message. This issue seems related to an old message to the list: http://erlang.org/pipermail/erlang-questions/2013-September/075385.html The relevant code and Dockerfile can be found here: https://github.com/edevil/docker-erlang-bug The built image: https://hub.docker.com/r/edevil/docker-erlang-bug/ Docker issue: https://github.com/docker/docker/issues/8910#issuecomment-165072444 Now, I don't even know if this is a Docker issue, or an Erlang issue, or neither. Can someone shed some light on this issue? Thank you and best regards, Andr? Cruz -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From on-your-mark@REDACTED Wed Dec 16 16:09:53 2015 From: on-your-mark@REDACTED (YuanZhiqian) Date: Wed, 16 Dec 2015 23:09:53 +0800 Subject: [erlang-questions] Use Erlang AMQP client library Message-ID: Hi guys, I have a frustrating time tonight trying to use the rabbitmq library for erlang in my project. There are two ways provided by the official site: a .ez archive file as well as source code. https://www.rabbitmq.com/erlang-client.html I wonder which one I should choose. I prefer to use the source code because in erlang's documentation page, it is said that loading code from an archive file is an experimental feature and is not recommended, http://www.erlang.org/doc/man/code.html, and besides, I'm not very sure how to use a .ez archive file. On the other hand, the source code won't compile through; I searched in google and people say that it is not wise to compile the source code on one's own to use rabbitmq at present, because of some tricky problems. I have tried using rebar and make to build the source code, but both failed with complaints as follows:

$ rebar compile
==> amqp_client-3.5.7-src (compile)
include/amqp_client.hrl:20: can't find include lib "rabbit_common/include/rabbit.hrl"
include/amqp_client.hrl:21: can't find include lib "rabbit_common/include/rabbit_framing.hrl"
src/amqp_gen_connection.erl:313: undefined macro 'INTERNAL_ERROR'
include/amqp_client.hrl:23: record 'P_basic' undefined
src/amqp_gen_connection.erl:174: record amqp_error undefined
src/amqp_gen_connection.erl:176: record amqp_error undefined
src/amqp_gen_connection.erl:212: function internal_error/3 undefined
src/amqp_gen_connection.erl:215: record 'connection.close' undefined
src/amqp_gen_connection.erl:266: record 'connection.close' undefined
src/amqp_gen_connection.erl:273: record 'connection.close' undefined
src/amqp_gen_connection.erl:275: record 'connection.close_ok' undefined
src/amqp_gen_connection.erl:280: record 'connection.blocked' undefined
src/amqp_gen_connection.erl:285: record 'connection.unblocked' undefined
src/amqp_gen_connection.erl:291: record amqp_error undefined
src/amqp_gen_connection.erl:352: record 'connection.close' undefined
src/amqp_gen_connection.erl:355: record 'connection.close' undefined
src/amqp_gen_connection.erl:357: variable 'Code' is unbound
src/amqp_gen_connection.erl:357: variable 'Text' is unbound
src/amqp_gen_connection.erl:368: record 'connection.close_ok' undefined
src/amqp_gen_connection.erl:290: Warning: variable 'Other' is unused

$ make
rm -f deps.mk
echo src/amqp_auth_mechanisms.erl:src/amqp_channel.erl:src/amqp_channels_manager.erl:src/amqp_channel_sup.erl:src/amqp_channel_sup_sup.erl:src/amqp_client.erl:src/amqp_connection.erl:src/amqp_connection_sup.erl:src/amqp_connection_type_sup.erl:src/amqp_direct_connection.erl:src/amqp_direct_consumer.erl:src/amqp_gen_connection.erl:src/amqp_gen_consumer.erl:src/amqp_main_reader.erl:src/amqp_network_connection.erl:src/amqp_rpc_client.erl:src/amqp_rpc_server.erl:src/amqp_selective_consumer.erl:src/amqp_sup.erl:src/amqp_uri.erl:src/rabbit_routing_util.erl:src/uri_parser.erl:include/amqp_client.hrl:include/amqp_client_internal.hrl:include/amqp_gen_consumer_spec.hrl:include/rabbit_routing_prefixes.hrl: | escript ../rabbitmq-server/generate_deps deps.mk ebin
escript: Failed to open file: ../rabbitmq-server/generate_deps
make: *** No rule to make target `deps.mk', needed by `ebin/amqp_auth_mechanisms.beam'. Stop.

Besides, in the README file, the instruction tells me to do like this:

$ git clone https://github.com/rabbitmq/rabbitmq-codegen.git
$ git clone https://github.com/rabbitmq/rabbitmq-server.git
$ git clone https://github.com/rabbitmq/rabbitmq-erlang-client.git
$ cd rabbitmq-erlang-client
$ make

Well, it won't compile either! And I am even more confused why I need to clone the rabbit erlang client again in this case, since the README file itself is already in the client's source code directory. To make a conclusion, my trouble here is: 1.
Should I use the .ez archive file or the source code?
2. How do I use a .ez archive file, and is it safe?
3. How should I deal with the compile errors?
4. Why do I need to clone another copy of the client's source code, as the README file tells me to?

I am used to putting a library in the "deps" folder of my project and having rebar build everything together for me; I think that's the preferable approach, at least because it keeps the project's folder layout clear.

Any advice is appreciated :)

Best regards
Zhiqian

-------------- next part -------------- An HTML attachment was scrubbed... URL: From kenneth@REDACTED Wed Dec 16 16:31:14 2015 From: kenneth@REDACTED (Kenneth Lundin) Date: Wed, 16 Dec 2015 16:31:14 +0100 Subject: [erlang-questions] Erlang/OTP 18.2 has been released Message-ID:

Erlang/OTP 18.2 is a service release on the 18 track with mostly bug fixes, but it does contain a number of new features and characteristics improvements as well.

Some highlights of the release are:

- ssl: Add a configurable upper limit for the session cache.
- erts: Add the function enif_getenv to read OS environment variables in a portable way from NIFs.
- kernel: Add the {line_delim, byte()} option to inet:setopts/2 and decode_packet/3.
- ssh: The 'ecdsa-sha2-nistp256', 'ecdsa-sha2-nistp384' and 'ecdsa-sha2-nistp521' signature algorithms for ssh are implemented. See RFC 5656.
- ssh: The ssh:daemon option dh_gex_groups is extended to read a user-provided ssh moduli file with generator-modulus pairs. The file is in OpenSSH format.
- Thanks to 41 different contributors!

You can find the Release Notes with more detailed info at http://www.erlang.org/download/otp_src_18.2.readme

You can download the full source distribution from http://www.erlang.org/download/otp_src_18.2.tar.gz

Note: To unpack the TAR archive you need a GNU TAR compatible program. For installation instructions please read the README that is part of the distribution.

You can also find the source code at github.com in the official Erlang repository.
Git tag OTP-18.2: https://github.com/erlang/otp/tree/OTP-18.2

The Windows binary distributions can be downloaded from
http://www.erlang.org/download/otp_win32_18.2.exe
http://www.erlang.org/download/otp_win64_18.2.exe

You can also download the complete HTML documentation or the Unix manual files:
http://www.erlang.org/download/otp_doc_html_18.2.tar.gz
http://www.erlang.org/download/otp_doc_man_18.2.tar.gz

You can also read the documentation on-line at http://www.erlang.org/doc/ (some release notes are not yet reflected in the on-line documentation, so see the Release Notes mentioned above; the new functionality itself is documented).

We also want to thank those who sent us patches, suggestions and bug reports.

The Erlang/OTP Team at Ericsson

-------------- next part -------------- An HTML attachment was scrubbed... URL: From binarin@REDACTED Wed Dec 16 18:06:55 2015 From: binarin@REDACTED (Alexey Lebedeff) Date: Wed, 16 Dec 2015 20:06:55 +0300 Subject: [erlang-questions] Use Erlang AMQP client library In-Reply-To: References: Message-ID:

Hi,

If you are using erlang.mk or rebar3 as your build tool, you can just add amqp_client to your dependencies, and everything will start working automagically.

If for some legacy reason you are using rebar2, you can use the rebar-friendly fork of the client at https://github.com/jbrisbin/amqp_client

Best,
Alexey

2015-12-16 18:09 GMT+03:00 YuanZhiqian :
> Hi guys,
>
> I have a frustrating time tonight trying to use rabbitmq library for
> erlang in my project. There're two ways provided by the official sites: .ez
> archive file as well as in source code.
> https://www.rabbitmq.com/erlang-client.html
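For reference, Alexey's suggestion of pulling amqp_client in as an ordinary dependency might look like the rebar.config fragment below. This is only a sketch: the fork URL comes from the message above, while the branch pin and the surrounding project layout are assumptions.

```erlang
%% Sketch of a rebar.config dependency entry for the rebar-friendly
%% fork mentioned above. {branch, "master"} is an assumption; in
%% practice you would pin a tag matching your RabbitMQ server version.
{deps, [
    {amqp_client, ".*",
     {git, "https://github.com/jbrisbin/amqp_client.git", {branch, "master"}}}
]}.
```

With rebar3 or erlang.mk, a plain dependency entry on the upstream client plays the same role, and the build tool fetches rabbit_common and friends for you.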
> Any advice is appreciated :)
>
> Best regards
> Zhiqian
>
> _______________________________________________
> erlang-questions mailing list
> erlang-questions@REDACTED
> http://erlang.org/mailman/listinfo/erlang-questions

-------------- next part -------------- An HTML attachment was scrubbed... URL: From on-your-mark@REDACTED Wed Dec 16 18:11:32 2015 From: on-your-mark@REDACTED (YuanZhiqian) Date: Thu, 17 Dec 2015 01:11:32 +0800 Subject: [erlang-questions] Use Erlang AMQP client library In-Reply-To: References: Message-ID:

Thanks, Alexey :) I'll have a try tomorrow. However, there seem to be a lot of compilation errors when building RabbitMQ from source; do you have a clue?

Sent from my iPhone

> On 17 Dec 2015, at 01:06, Alexey Lebedeff wrote:
>
> Hi,
>
> If you are using erlang.mk or rebar3 as build tool, you could just add
> amqp_client to your dependencies, and everything will start working
> automagically.
>
> If for some legacy reasons you are using rebar2, then you could use the
> rebar-friendly fork of the client at https://github.com/jbrisbin/amqp_client
>
> Best,
> Alexey
>
> 2015-12-16 18:09 GMT+03:00 YuanZhiqian :
>> Hi guys,
>>
>> I have a frustrating time tonight trying to use rabbitmq library for
>> erlang in my project. There're two ways provided by the official sites: .ez
>> archive file as well as in source code.
>> https://www.rabbitmq.com/erlang-client.html
>> Any advice is appreciated :)
>>
>> Best regards
>> Zhiqian
>>
>> _______________________________________________
>> erlang-questions mailing list
>> erlang-questions@REDACTED
>> http://erlang.org/mailman/listinfo/erlang-questions

-------------- next part -------------- An HTML attachment was scrubbed... URL: From binarin@REDACTED Wed Dec 16 18:15:37 2015 From: binarin@REDACTED (Alexey Lebedeff) Date: Wed, 16 Dec 2015 20:15:37 +0300 Subject: [erlang-questions] Use Erlang AMQP client library In-Reply-To: References: Message-ID:

Hi,

Actually, building from source also works, using the following two commands:

$ git clone https://github.com/rabbitmq/rabbitmq-erlang-client.git
$ make -C rabbitmq-erlang-client

You should have no reason to do it this way, but if it fails, you probably have some problems with your Erlang environment.

Best,
Alexey

2015-12-16 18:09 GMT+03:00 YuanZhiqian :
> Hi guys,
>
> I have a frustrating time tonight trying to use rabbitmq library for
> erlang in my project. There're two ways provided by the official sites: .ez
> archive file as well as in source code.
> https://www.rabbitmq.com/erlang-client.html
> Any advice is appreciated :)
>
> Best regards
> Zhiqian
>
> _______________________________________________
> erlang-questions mailing list
> erlang-questions@REDACTED
> http://erlang.org/mailman/listinfo/erlang-questions

-------------- next part -------------- An HTML attachment was scrubbed... URL: From binarin@REDACTED Wed Dec 16 18:36:01 2015 From: binarin@REDACTED (Alexey Lebedeff) Date: Wed, 16 Dec 2015 20:36:01 +0300 Subject: [erlang-questions] Use Erlang AMQP client library In-Reply-To: References: Message-ID:

Hi,

The better place to ask this sort of question is the rabbitmq-users mailing list.

But to summarize:
- Failure with rebar is an expected outcome; it's not the right build tool.
- Failure with the source tarball and `make` is an error; I've filed an issue at https://github.com/rabbitmq/rabbitmq-erlang-client/issues/29
- Failure to build a github checkout with `make`: I couldn't reproduce it. It's probably some problem with your build environment, but more details are needed if you want help from somebody :)

Best,
Alexey

2015-12-16 20:11 GMT+03:00 YuanZhiqian :
> Thanks, Alexey :) I'll have a try tomorrow. However, there seem to be a lot
> of compilation errors when building RabbitMQ from source; do you have a clue?
>
> Sent from my iPhone
>
> On 17 Dec 2015, at 01:06, Alexey Lebedeff wrote:
>
> Hi,
>
> If you are using erlang.mk or rebar3 as build tool, you could just add
> amqp_client to your dependencies, and everything will start working
> automagically.
>
> If for some legacy reasons you are using rebar2, then you could use the
> rebar-friendly fork of the client at https://github.com/jbrisbin/amqp_client
>
> Best,
> Alexey
>
> 2015-12-16 18:09 GMT+03:00 YuanZhiqian :
>> Hi guys,
>>
>> I have a frustrating time tonight trying to use rabbitmq library for
>> erlang in my project. There're two ways provided by the official sites: .ez
>> archive file as well as in source code.
>> Why do I need to clone another client's source code, as told by the
>> README file?
>>
>> I am used to put a library in "deps" folder in my project and rebar would
>> build them altogether for me, I think that's a preferable choice, quite
>> clear in terms of project's folder layout at least.
>>
>> Any advice is appreciated :)
>>
>> Best regards
>> Zhiqian
>>
>> _______________________________________________
>> erlang-questions mailing list
>> erlang-questions@REDACTED
>> http://erlang.org/mailman/listinfo/erlang-questions

-------------- next part -------------- An HTML attachment was scrubbed... URL: From pablo.platt@REDACTED Wed Dec 16 20:13:45 2015 From: pablo.platt@REDACTED (pablo platt) Date: Wed, 16 Dec 2015 21:13:45 +0200 Subject: [erlang-questions] Use Erlang AMQP client library In-Reply-To: References: Message-ID:

A rebar2 pre-hook for amqp_client: https://gist.github.com/chvanikoff/84f5ddf5061a42de6d58

On Wed, Dec 16, 2015 at 7:36 PM, Alexey Lebedeff wrote:
> Hi,
>
> The better place to ask this sort of question is the rabbitmq-users
> mailing list.
>
> But to summarize:
> - Failure with rebar is an expected outcome; it's not the right build tool.
> - Failure with the source tarball and `make` is an error; I've filed an issue
>   at https://github.com/rabbitmq/rabbitmq-erlang-client/issues/29
> - Failure to build a github checkout with `make`: I couldn't reproduce it.
>   It's probably some problem with your build environment, but more details
>   are needed if you want help from somebody :)
>
> Best,
> Alexey
>
> 2015-12-16 20:11 GMT+03:00 YuanZhiqian :
>> Thanks, Alexey :) I'll have a try tomorrow. However, there seem to be a lot
>> of compilation errors when building RabbitMQ from source; do you have a clue?
>>
>> Sent from my iPhone
>>
>> On 17 Dec 2015, at 01:06, Alexey Lebedeff wrote:
>>> Any advice is appreciated :)
>>>
>>> Best regards
>>> Zhiqian
>>>
>>> _______________________________________________
>>> erlang-questions mailing list
>>> erlang-questions@REDACTED
>>> http://erlang.org/mailman/listinfo/erlang-questions

> _______________________________________________
> erlang-questions mailing list
> erlang-questions@REDACTED
> http://erlang.org/mailman/listinfo/erlang-questions

-------------- next part -------------- An HTML attachment was scrubbed... URL: From hugo@REDACTED Wed Dec 16 21:25:09 2015 From: hugo@REDACTED (Hugo Mills) Date: Wed, 16 Dec 2015 20:25:09 +0000 Subject: [erlang-questions] Deploying multiple webapps In-Reply-To: References: <20151215234147.GK26782@carfax.org.uk> <20151216001028.GL26782@carfax.org.uk> Message-ID: <20151216202509.GM26782@carfax.org.uk>

Thanks for the reply.

On Wed, Dec 16, 2015 at 12:17:50AM +0000, Chandru wrote:
> On 16 December 2015 at 00:10, Hugo Mills wrote:
> > Hi, Chandru,
> >
> > On Tue, Dec 15, 2015 at 11:57:56PM +0000, Chandru wrote:
> > > Why does all this have to be done in Erlang?
> > >
> > > It sounds like your best bet is to use something like nginx/varnish/haproxy
> > > (or even Apache, as you explained) to front your server farm. You can get
> > > that component to then rewrite the URLs and route requests to wherever your
> > > Erlang web services are located. I would do that rather than trying to do
> > > everything in Erlang.
> >
> > Thanks for the advice.
> >
> > I guess I'm unhappy (probably with no good reason) with the idea of
> > running each service in a separate erlang VM, each one running on
> > a separate port, and having to ensure that those ports aren't visible
> > outside the machine (because they'll be running HTTP, not the desired
> > HTTPS).
>
> Sorry, I don't think I explained clearly. My point was that you can run
> multiple services in a single VM, on a single port.
When your HTTPS request > hits your front-end (nginx/varnish/apache/whatever), you get it to do two > things. > - Handle all the TLS stuff > - Rewrite the URL in the request. (If the request from the client is > http://server.me/service1, rewrite it to http://internal.server1.me/service1, > http://server.me/service2 becomes > http://internal.server1.me/service2) OK, I'm happy with all of that as an idea. > - You can have a global cowboy handler (one module which is used in all > your backend erlang nodes) which provides internal routing for all your > services. At its most basic form, routing in cowboy is redirecting requests > based on URL to a module. So you just have to make sure this module is > common across all your erlang nodes, regardless of how you distribute your > services across nodes. So I make sure that I have a 'common_routing' (or whatever) module in each service -- do those have their own router (for the internal routing within each service) in each one? I guess that's something that Lo?c's work with Cowboy and RabbitMQ would help with. > What am I missing here? It's probably what I'm missing... How do I handle deploying this? I really don't want to be building a different release for each configuration, so how do I enable/disable the different services in each separate deployment? How do I make sure that the unused services don't run when the release is run, and don't have routing entries in the top-level router? (Ideally, not shipping the unused code at all, but if I'm building a single release, I guess that's not going to be possible). Hugo. > > > > > > On 15 December 2015 at 23:41, Hugo Mills wrote: > > > > > > > I've got a collection of small services, with minimal coupling > > > > between the back ends of those services (orchestration is done mostly > > > > client-side). I'd like to put an HTTPS interface in front of each one > > > > -- say, with cowboy. 
> > > > > > > > What I'd also like to be able to do, at least in principle, is > > > > deploy some arbitrary subset of those services on each machine in my > > > > (comedically-named) server farm. I'd like to be able to do this with > > > > one TLS configuration, and preferably under a single port. > > > > > > > > i.e., access my services through > > > > > > > > https://server.me/service1/... > > > > https://server.me/service2/... > > > > https://server.me/service3/... > > > > > > > > Now, in python-land, which is largely where I come from, I'd set up > > > > Apache with mod-wsgi, and deploy each WSGI app to a specific URL > > > > within the same URL namespace. I'm not quite sure how to do that > > > > easily with erlang+cowboy, because there seems to be no easy way of > > > > treating a webapp as a unit within a larger server configuration. I > > > > keep coming to one of two approaches: > > > > > > > > 1) Write each service completely independently (as HTTP), run it on a > > > > distinct port, and splice together the URL namespaces through a > > > > reverse proxy on a "normal" web server like Apache. > > > > > > > > 2) Find some way to automatically write a top-level router for cowboy, > > > > for each set of services that I want to deploy to a machine. > > > > > > > > I don't much like option 1, but I like option 2 even less. I guess > > > > I could write some kind of "top-level" app that, given a bunch of > > > > webapp modules (via a configuration file of some kind), gets a router > > > > for each module and transforms those routers into a single router > > > > config. Does such a thing already exist? > > > > > > > > It all just feels a bit awkward, and I feel like I'm missing > > > > something. What do other people do to put together this kind of setup? > > > > > > > > Hugo. 
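[Editorial note: the "top-level router" of option 2 can be sketched as a small glue module that merges per-service cowboy route sets into one compiled dispatch. The module name, the `routes/0` callback, and the service list are all hypothetical illustrations against the cowboy 1.x routing API, not an existing library.]

```erlang
%% top_router.erl -- hypothetical glue module, not part of cowboy.
%% Each deployed service app is assumed to export
%%   routes() -> [{PathMatch, HandlerModule, HandlerOpts}].
-module(top_router).
-export([dispatch/1]).

%% Services is a [{ServiceName, ServiceModule}] list, e.g. read from
%% application:get_env/2 so each deployment configures its own subset.
dispatch(Services) ->
    Routes = lists:append([prefixed_routes(S) || S <- Services]),
    cowboy_router:compile([{'_', Routes}]).

%% Mount every route of a service under /ServiceName/...
prefixed_routes({Name, Mod}) ->
    [{"/" ++ Name ++ Path, Handler, Opts}
     || {Path, Handler, Opts} <- Mod:routes()].
```

The compiled dispatch would then be passed to `cowboy:start_http/4` in the `env` option (`[{env, [{dispatch, Dispatch}]}]` in cowboy 1.x), so disabling a service is just removing it from the configured list.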
> > > > > > -- Hugo Mills | Beware geeks bearing GIFs hugo@REDACTED carfax.org.uk | http://carfax.org.uk/ | PGP: E2AB1DE4 | -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 836 bytes Desc: Digital signature URL: From hugo@REDACTED Wed Dec 16 21:31:02 2015 From: hugo@REDACTED (Hugo Mills) Date: Wed, 16 Dec 2015 20:31:02 +0000 Subject: [erlang-questions] Deploying multiple webapps In-Reply-To: <5670AE75.8000300@ninenines.eu> References: <20151215234147.GK26782@carfax.org.uk> <5670AE75.8000300@ninenines.eu> Message-ID: <20151216203102.GN26782@carfax.org.uk> Hi, Lo?c, On Wed, Dec 16, 2015 at 01:21:09AM +0100, Lo?c Hoguin wrote: > I have been working on RabbitMQ these past few months, and one of > the tasks I got assigned was to switch the Web components from > Webmachine to Cowboy. > > RabbitMQ already had a way to have different services on different > URLs running under the same port, as you describe. So my work was in > part to make it work with Cowboy. > > I didn't have to change much. > > I made a tiny middleware that queries the RabbitMQ component keeping > tracks of all the services, that then returns with the 'dispatch' > environment variable added. This middleware runs just before the > cowboy_router middleware: https://github.com/rabbitmq/rabbitmq-web-dispatch/blob/rabbitmq-management-63/src/rabbit_cowboy_middleware.erl#L27 > > The process keeping tracks of all services simply has a mapping from > /service1/ to the service's dispatch list (/service1/ is added > dynamically). > > This works pretty well, is all on one node, one port, just like you > need, and doesn't require much code. I suppose it wouldn't be too > difficult to extract and make it its own project, if that's > something people need. > > Note that everything I talk about here has not been merged yet; but > I'm getting close to completion (all tests pass, yada yada). 
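[Editorial note: in cowboy 1.x terms, the middleware Loïc describes boils down to fetching the current service routing table and injecting it as the `dispatch` environment value before `cowboy_router` runs. A rough sketch follows; the `service_registry` module is hypothetical -- the real implementation is the linked rabbit_cowboy_middleware.]

```erlang
%% dynamic_dispatch.erl -- illustrative cowboy 1.x middleware sketch.
-module(dynamic_dispatch).
-behaviour(cowboy_middleware).
-export([execute/2]).

%% Runs just before cowboy_router: look up the current routing table
%% from a (hypothetical) registry process that tracks mounted services,
%% and store it under 'dispatch', which cowboy_router then consumes.
execute(Req, Env) ->
    Dispatch = service_registry:dispatch_table(),  %% hypothetical lookup
    {ok, Req, [{dispatch, Dispatch} | lists:keydelete(dispatch, 1, Env)]}.
```

Because the table is looked up per request, services can be mounted and unmounted at runtime without recompiling any routes into the listener.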
It sounds like it would be a big part of a solution of what I'd like to do. I guess another analogy of what I'm trying to do is that it's very like a Java Servlet container: something that I can drop services into, where the container handles the main HTTP serving and holds the configuration of which services it runs, where I don't have to recompile, rebuild or redeploy the container to alter the set of services running in it. Hugo. > Cheers, > > On 12/16/2015 12:41 AM, Hugo Mills wrote: > > I've got a collection of small services, with minimal coupling > >between the back ends of those services (orchestration is done mostly > >client-side). I'd like to put an HTTPS interface in front of each one > >-- say, with cowboy. > > > > What I'd also like to be able to do, at least in principle, is > >deploy some arbitrary subset of those services on each machine in my > >(comedically-named) server farm. I'd like to be able to do this with > >one TLS configuration, and preferably under a single port. > > > >i.e., access my services through > > > >https://server.me/service1/... > >https://server.me/service2/... > >https://server.me/service3/... > > > > Now, in python-land, which is largely where I come from, I'd set up > >Apache with mod-wsgi, and deploy each WSGI app to a specific URL > >within the same URL namespace. I'm not quite sure how to do that > >easily with erlang+cowboy, because there seems to be no easy way of > >treating a webapp as a unit within a larger server configuration. I > >keep coming to one of two approaches: > > > >1) Write each service completely independently (as HTTP), run it on a > > distinct port, and splice together the URL namespaces through a > > reverse proxy on a "normal" web server like Apache. > > > >2) Find some way to automatically write a top-level router for cowboy, > > for each set of services that I want to deploy to a machine. > > > > I don't much like option 1, but I like option 2 even less. 
I guess > >I could write some kind of "top-level" app that, given a bunch of > >webapp modules (via a configuration file of some kind), gets a router > >for each module and transforms those routers into a single router > >config. Does such a thing already exist? > > > > It all just feels a bit awkward, and I feel like I'm missing > >something. What do other people do to put together this kind of setup? > > > > Hugo. > > > > > > > >_______________________________________________ > >erlang-questions mailing list > >erlang-questions@REDACTED > >http://erlang.org/mailman/listinfo/erlang-questions > > > -- Hugo Mills | You've read the project plan. Forget that. We're hugo@REDACTED carfax.org.uk | going to Do Stuff and Have Fun doing it. http://carfax.org.uk/ | PGP: E2AB1DE4 | Jeremy Frey -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 836 bytes Desc: Digital signature URL: From mjtruog@REDACTED Wed Dec 16 22:16:55 2015 From: mjtruog@REDACTED (Michael Truog) Date: Wed, 16 Dec 2015 13:16:55 -0800 Subject: [erlang-questions] Deploying multiple webapps In-Reply-To: <20151216203102.GN26782@carfax.org.uk> References: <20151215234147.GK26782@carfax.org.uk> <5670AE75.8000300@ninenines.eu> <20151216203102.GN26782@carfax.org.uk> Message-ID: <5671D4C7.70004@gmail.com> On 12/16/2015 12:31 PM, Hugo Mills wrote: > Hi, Lo?c, > > On Wed, Dec 16, 2015 at 01:21:09AM +0100, Lo?c Hoguin wrote: >> I have been working on RabbitMQ these past few months, and one of >> the tasks I got assigned was to switch the Web components from >> Webmachine to Cowboy. >> >> RabbitMQ already had a way to have different services on different >> URLs running under the same port, as you describe. So my work was in >> part to make it work with Cowboy. >> >> I didn't have to change much. 
>> >> I made a tiny middleware that queries the RabbitMQ component keeping >> tracks of all the services, that then returns with the 'dispatch' >> environment variable added. This middleware runs just before the >> cowboy_router middleware: https://github.com/rabbitmq/rabbitmq-web-dispatch/blob/rabbitmq-management-63/src/rabbit_cowboy_middleware.erl#L27 >> >> The process keeping tracks of all services simply has a mapping from >> /service1/ to the service's dispatch list (/service1/ is added >> dynamically). >> >> This works pretty well, is all on one node, one port, just like you >> need, and doesn't require much code. I suppose it wouldn't be too >> difficult to extract and make it its own project, if that's >> something people need. >> >> Note that everything I talk about here has not been merged yet; but >> I'm getting close to completion (all tests pass, yada yada). > It sounds like it would be a big part of a solution of what I'd > like to do. > > I guess another analogy of what I'm trying to do is that it's very > like a Java Servlet container: something that I can drop services > into, where the container handles the main HTTP serving and holds the > configuration of which services it runs, where I don't have to > recompile, rebuild or redeploy the container to alter the set of > services running in it. CloudI provides a service abstraction for fault-tolerant messaging utilizing the Erlang VM. The problem you are describing is part of the quickstart at http://cloudi.org/#Erlang . CloudI supports non-Erlang programming languages, providing the same service abstraction, but you can use it for Erlang-only services (use the https://github.com/CloudI/cloudi_core/ repository to get only the Erlang/Elixir support). If you need to compare this to RabbitMQ, look at http://cloudi.org/faq.html#1_Messaging . 
An Erlang-only example that provides a bit more information than the quickstart is at https://github.com/CloudI/CloudI/tree/develop/examples/hello_world5 if you choose to embed CloudI services into an Erlang OTP application hierarchy (otherwise you would be using the CloudI configuration file to have an explicit startup order).

From borja.carbo@REDACTED Wed Dec 16 23:59:07 2015
From: borja.carbo@REDACTED (borja.carbo@REDACTED)
Date: Wed, 16 Dec 2015 22:59:07 +0000
Subject: [erlang-questions] --xinclude in xsltproc
Message-ID: <20151216225907.Horde.t-xUhkHnLjvwOLlW3aOr0w2@whm.todored.info>

If you do not mind I would like to close this thread. It is not that I have found the answer to my questions. It is just that, taking the code at the github, I have concluded that the examples/skeletons of the documentation in erl_docgen do not lead so straightforwardly to what I was expecting. example: rather than using I used the . Other aspect was rather than using I used . rather than using I used

Another point to manage with success was the creation of one book.xml file (as a target of the xsltproc command) using as example that from: https://github.com/erlang/otp/blob/maint/lib/cosProperty/doc/src/book.xml.

However there is a strategic question related to: when to use EDoc tools to build module/application documentation and when to use Erl_Docgen. Both create different HTML layouts for the documentation and also somewhat different documentation structure.

Anyway, sorry for the disturbance and thanks.

-------------- next part --------------
An embedded message was scrubbed...
From: borja.carbo@REDACTED
Subject: --xinclude in xsltproc
Date: Wed, 16 Dec 2015 08:59:23 +0000
Size: 2080
URL: 

From borja.carbo@REDACTED Thu Dec 17 00:30:11 2015
From: borja.carbo@REDACTED (borja.carbo@REDACTED)
Date: Wed, 16 Dec 2015 23:30:11 +0000
Subject: [erlang-questions] Erl_Docgen vs EDoc - when to use both?
Message-ID: <20151216233011.Horde.JbuHes2E-316sk4W89l5bQ5@whm.todored.info>

Both the Erl_Docgen and EDoc applications provide mechanisms to extract the documentation information from the modules. However, when processed they do not have a common layout.

HTML files generated using EDoc have direct links to the type definitions, facilitating navigation when the information is spread across modules. Erl_Docgen has a tool to extract the information first as an XML file and later transform it into HTML files. However, Erl_Docgen does not provide the direct links, just information about where (which module) to look for the type definition (as normal text: see...). I.e. it looks like using EDoc is much better than Erl_Docgen.

On the other hand, the structure of the HTML files from the two applications does not correspond to a common documentation structure (and the internal HTML code is not compatible). Erl_Docgen generates the same look as the standard documentation -- very good for facilitating navigation between release notes, reference manuals and user guides. However, EDoc is more oriented to a specific structure led by one "overview" application central point. I.e. it looks like using Erl_Docgen would be recommendable.

So here is the question: is there any strategy that allows mixing the results of both tools so we can get the best of both?

I would be pleased to be wrong and to have overlooked some information in the documentation. Do not hesitate to correct me.

Best Regards / Borja

From nathaniel@REDACTED Thu Dec 17 01:59:20 2015
From: nathaniel@REDACTED (Nathaniel Waisbrot)
Date: Wed, 16 Dec 2015 19:59:20 -0500
Subject: [erlang-questions] Strange interaction between Docker and Erlangs ports (exit_status lost)
In-Reply-To: 
References: 
Message-ID: 

Does adding a `-t` flag to `docker run` (to give it a TTY) make any difference? How long have you tried waiting? If it hangs forever, use `docker exec` to connect to the container and see if it's executing.
Better: use `touch` instead of `ls` and then you can see if the process has run and exited. Have you tried other commands? How about `echo`? Have you tried giving a full path (`/bin/ls`)?

On Wed, Dec 16, 2015 at 6:47 AM, André Cruz wrote:
> Hello.
>
> I've run into a strange problem when using Erlang and Docker. I have a
> small PoC that uses open_port to launch an "ls" command and, under normal
> circumstances, I get the exit_status message and the program terminates.
> However, if I run the command directly via "docker run", it seems the
> message is lost. The "ls" command is executed and terminates, but I don't
> get the "exit_status" message.
>
> This issue seems related to an old message to the list:
> http://erlang.org/pipermail/erlang-questions/2013-September/075385.html
>
> The relevant code and Dockerfile can be found here:
> https://github.com/edevil/docker-erlang-bug
> The built image: https://hub.docker.com/r/edevil/docker-erlang-bug/
> Docker issue:
> https://github.com/docker/docker/issues/8910#issuecomment-165072444
>
> Now, I don't even know if this is a Docker issue, or an Erlang issue, or
> neither. Can someone shed some light on this issue?
>
> Thank you and best regards,
> André Cruz
>
> _______________________________________________
> erlang-questions mailing list
> erlang-questions@REDACTED
> http://erlang.org/mailman/listinfo/erlang-questions

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From ok@REDACTED Thu Dec 17 03:19:34 2015
From: ok@REDACTED (Richard A. O'Keefe)
Date: Thu, 17 Dec 2015 15:19:34 +1300
Subject: [erlang-questions] Feedback for my first non-trivial Erlang program
In-Reply-To: 
References: , <1838048.PgCCDcU252@changa>,
Message-ID: 

On the subject of macros vs functions for constants, there is a third way. (There should be a fourth way. Abstract patterns can do *so* many nice things...)
For a constant that you don't want to use in a pattern or guard, you can declare a function and then tell the compiler to -inline it.

My advice would be:
- If you want to name something that must appear in a pattern or guard,
  you have no choice but to use a macro.
  (Until abstract patterns are implemented.)
- Otherwise, always start by using plain functions.
  Get the code working, then benchmark and profile it.
  + If performance of these functions is OK, stop worrying.
  + Try inlining and benchmark+profile again.
  +++ It's generally better to try eliminating a calculation than to
      micro-optimise it.

From ok@REDACTED Thu Dec 17 04:01:09 2015
From: ok@REDACTED (Richard A. O'Keefe)
Date: Thu, 17 Dec 2015 16:01:09 +1300
Subject: [erlang-questions] Question about Erlang and Ada
In-Reply-To: 
References: <566C1F77.1050701@power.com.pl> <566EA482.3080807@power.com.pl>
Message-ID: <4946B8AA-CFFE-43D9-AD87-5D736AE8EF9D@cs.otago.ac.nz>

On 15/12/2015, at 10:49 pm, Richard Carlsson wrote:
> So yes, both are good to have, but don't trust "correct by construction" too much, and don't underestimate how many situations a clean quick restart can fix.

Just yesterday I went to use a certain function that some idiot[%] had written, and found that it was a completely correct implementation of the wrong specification.

[%] Me, of course.

This is the fundamental limitation of "correct by construction", and the reason why formally verified programs still have to be tested.

From raould@REDACTED Thu Dec 17 04:06:47 2015
From: raould@REDACTED (Raoul Duke)
Date: Wed, 16 Dec 2015 19:06:47 -0800
Subject: [erlang-questions] Question about Erlang and Ada
In-Reply-To: <4946B8AA-CFFE-43D9-AD87-5D736AE8EF9D@cs.otago.ac.nz>
References: <566C1F77.1050701@power.com.pl> <566EA482.3080807@power.com.pl> <4946B8AA-CFFE-43D9-AD87-5D736AE8EF9D@cs.otago.ac.nz>
Message-ID: 

> [%] Me, of course.

Wait. You /sure/ that wasn't *me*?
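[Editorial note: the "third way" above -- a plain function constant plus an -inline directive -- can be sketched as follows. Module and names are invented for illustration.]

```erlang
-module(const_demo).
-export([check/1]).

%% A macro: the only option when the constant must appear in a
%% pattern or guard.
-define(MAX_SIZE, 1024).

%% A plain function constant; the inline directive asks the compiler
%% to expand calls to it in place, so there is no call overhead.
-compile({inline, [max_size/0]}).
max_size() -> 1024.

%% The guard needs the macro; ordinary expressions can use the function.
%% check(512) -> {ok, 512}; check(2048) -> {error, 1024}.
check(N) when is_integer(N), N =< ?MAX_SIZE -> {ok, N};
check(_) -> {error, max_size()}.
```

Starting with the function form keeps the constant visible to tools like the debugger and tracer; the macro is reserved for the guard position where a function call is not allowed.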
From on-your-mark@REDACTED Thu Dec 17 09:34:11 2015
From: on-your-mark@REDACTED (YuanZhiqian)
Date: Thu, 17 Dec 2015 16:34:11 +0800
Subject: [erlang-questions] Use Erlang AMQP client library
In-Reply-To: 
References: , , , ,
Message-ID: 

Thanks a lot! :)

Date: Wed, 16 Dec 2015 21:13:45 +0200
Subject: Re: [erlang-questions] Use Erlang AMQP client library
From: pablo.platt@REDACTED
To: binarin@REDACTED
CC: on-your-mark@REDACTED; erlang-questions@REDACTED

rebar2 pre-hook for amqp_client
https://gist.github.com/chvanikoff/84f5ddf5061a42de6d58

On Wed, Dec 16, 2015 at 7:36 PM, Alexey Lebedeff wrote:

Hi,

The better place to ask this sort of questions is the rabbitmq-users mailing list. But to summarize:
- Failure with rebar is an expected outcome; it's not the right build tool.
- Failure with the source tarball and `make` is an error - I've filed an issue at https://github.com/rabbitmq/rabbitmq-erlang-client/issues/29
- Failure to build from a github checkout with `make` - couldn't reproduce it. Probably it's some problem with your build environment, but more details are needed if you want help from somebody )

Best,
Alexey

2015-12-16 20:11 GMT+03:00 YuanZhiqian :

Thanks Alexey :), I'll have a try tomorrow. However there seem to be a lot of compilation errors when building rabbitmq from code; do you have a clue?

[Sent from my iPhone]

On Dec 17, 2015, at 01:06, Alexey Lebedeff wrote:

Hi,

If you are using erlang.mk or rebar3 as build tool, you could just add amqp_client to your dependencies, and everything will start working automagically.

If for some legacy reasons you are using rebar2, then you could use the rebar-friendly fork of the client at https://github.com/jbrisbin/amqp_client

Best,
Alexey

2015-12-16 18:09 GMT+03:00 YuanZhiqian :

Hi guys,

I have had a frustrating time tonight trying to use the rabbitmq library for erlang in my project. There are two ways provided by the official site: a .ez archive file as well as source code. https://www.rabbitmq.com/erlang-client.html

I wonder which one I should choose.
I prefer to use the source code because erlang's documentation page says that loading code from an archive file is an experimental feature and is not recommended, http://www.erlang.org/doc/man/code.html; besides, I'm not very sure how to use a .ez archive file.

On the other hand, the source code won't compile through. I searched in google and people say that it is not wise to compile the source code on one's own to use rabbitmq at present, because of some tricky problems.

I have tried using rebar and make to build the source code, but both failed with complaints as follows:

$ rebar compile
==> amqp_client-3.5.7-src (compile)
include/amqp_client.hrl:20: can't find include lib "rabbit_common/include/rabbit.hrl"
include/amqp_client.hrl:21: can't find include lib "rabbit_common/include/rabbit_framing.hrl"
src/amqp_gen_connection.erl:313: undefined macro 'INTERNAL_ERROR'
include/amqp_client.hrl:23: record 'P_basic' undefined
src/amqp_gen_connection.erl:174: record amqp_error undefined
src/amqp_gen_connection.erl:176: record amqp_error undefined
src/amqp_gen_connection.erl:212: function internal_error/3 undefined
src/amqp_gen_connection.erl:215: record 'connection.close' undefined
src/amqp_gen_connection.erl:266: record 'connection.close' undefined
src/amqp_gen_connection.erl:273: record 'connection.close' undefined
src/amqp_gen_connection.erl:275: record 'connection.close_ok' undefined
src/amqp_gen_connection.erl:280: record 'connection.blocked' undefined
src/amqp_gen_connection.erl:285: record 'connection.unblocked' undefined
src/amqp_gen_connection.erl:291: record amqp_error undefined
src/amqp_gen_connection.erl:352: record 'connection.close' undefined
src/amqp_gen_connection.erl:355: record 'connection.close' undefined
src/amqp_gen_connection.erl:357: variable 'Code' is unbound
src/amqp_gen_connection.erl:357: variable 'Text' is unbound
src/amqp_gen_connection.erl:368: record 'connection.close_ok' undefined
src/amqp_gen_connection.erl:290: Warning: variable 'Other' is unused

$ make
rm -f deps.mk
echo src/amqp_auth_mechanisms.erl:src/amqp_channel.erl:src/amqp_channels_manager.erl:src/amqp_channel_sup.erl:src/amqp_channel_sup_sup.erl:src/amqp_client.erl:src/amqp_connection.erl:src/amqp_connection_sup.erl:src/amqp_connection_type_sup.erl:src/amqp_direct_connection.erl:src/amqp_direct_consumer.erl:src/amqp_gen_connection.erl:src/amqp_gen_consumer.erl:src/amqp_main_reader.erl:src/amqp_network_connection.erl:src/amqp_rpc_client.erl:src/amqp_rpc_server.erl:src/amqp_selective_consumer.erl:src/amqp_sup.erl:src/amqp_uri.erl:src/rabbit_routing_util.erl:src/uri_parser.erl:include/amqp_client.hrl:include/amqp_client_internal.hrl:include/amqp_gen_consumer_spec.hrl:include/rabbit_routing_prefixes.hrl: | escript ../rabbitmq-server/generate_deps deps.mk ebin
escript: Failed to open file: ../rabbitmq-server/generate_deps
make: *** No rule to make target `deps.mk', needed by `ebin/amqp_auth_mechanisms.beam'. Stop.

Besides, in the README file, the instruction tells me to do like this:

$ git clone https://github.com/rabbitmq/rabbitmq-codegen.git
$ git clone https://github.com/rabbitmq/rabbitmq-server.git
$ git clone https://github.com/rabbitmq/rabbitmq-erlang-client.git
$ cd rabbitmq-erlang-client
$ make

Well, it won't compile either! And I am even more confused why I need to clone the rabbit erlang client again in this case, since the README file itself is already in the client's source code directory.

To make a conclusion, my trouble here is:

1. Should I use the .ez archive file or the source code?
2. How do I use a .ez archive file? Is it safe?
3. How should I deal with the compile errors?
4. Why do I need to clone another copy of the client's source code, as told by the README file?

I am used to putting a library in the "deps" folder in my project and having rebar build them all together for me; I think that's a preferable choice, quite clear in terms of the project's folder layout at least.
Any advice is appreciated :)

Best regards
Zhiqian

_______________________________________________
erlang-questions mailing list
erlang-questions@REDACTED
http://erlang.org/mailman/listinfo/erlang-questions

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From on-your-mark@REDACTED Thu Dec 17 09:51:56 2015
From: on-your-mark@REDACTED (YuanZhiqian)
Date: Thu, 17 Dec 2015 16:51:56 +0800
Subject: [erlang-questions] Use Erlang AMQP client library
In-Reply-To: 
References: , , ,
Message-ID: 

Hi Alexey,

It's interesting: I've just tried copying rebar.config from the rebar-friendly fork into the canonical repository (what I intended was to tell rebar where to find the correct dependency), and then I ran rebar get-deps && rebar compile, and everything worked perfectly.

So I guess there's something wrong with the erlang.mk setup, which is why building with 'make' is unsuccessful.

Best,
Zhiqian

Date: Wed, 16 Dec 2015 20:36:01 +0300
Subject: Re: [erlang-questions] Use Erlang AMQP client library
From: binarin@REDACTED
To: on-your-mark@REDACTED
CC: erlang-questions@REDACTED

Hi,

The better place to ask this sort of questions is the rabbitmq-users mailing list. But to summarize:
- Failure with rebar is an expected outcome; it's not the right build tool.
- Failure with the source tarball and `make` is an error - I've filed an issue at https://github.com/rabbitmq/rabbitmq-erlang-client/issues/29
- Failure to build from a github checkout with `make` - couldn't reproduce it. Probably it's some problem with your build environment, but more details are needed if you want help from somebody )

Best,
Alexey

2015-12-16 20:11 GMT+03:00 YuanZhiqian :

Thanks Alexey :), I'll have a try tomorrow. However there seem to be a lot of compilation errors when building rabbitmq from code; do you have a clue?

[Sent from my iPhone]

On Dec 17, 2015, at 01:06, Alexey Lebedeff wrote:
Hi, If you are using erlang.mk or rebar3 as build tool, you could just add amqp_client to your dependencies, and everything will start working automagically. If for some legacy reasons you are using rebar2, then you could use rebar-friendly fork of client at https://github.com/jbrisbin/amqp_client Best,Alexey 2015-12-16 18:09 GMT+03:00 YuanZhiqian : Hi guys, I have a frustrating time tonight trying to use rabbitmq library for erlang in my project. There're two ways provided by the official sites: .ez archive file as well as in source code. https://www.rabbitmq.com/erlang-client.html I wonder which one I should choose. I prefer to use the source code because in erlang's documentation page, it is said the loading code from archive file is an experimental feature and is not recommend, http://www.erlang.org/doc/man/code.html, besides, I'm not very sure how to use a .ez archive file. On the other hand, the source code won't compile through, I searched in google and people says that it is not wise to compile source code by one's own to use rabbitmq at present, because of some tricky problem. 
I have tried using rebar and make to build the source code, but both failed with complaints as follows:

[... same `rebar compile` and `make` error output as quoted earlier in the thread ...]

Besides, in the README file, the instruction tells me to do like this:

$ git clone https://github.com/rabbitmq/rabbitmq-codegen.git
$ git clone https://github.com/rabbitmq/rabbitmq-server.git
$ git clone https://github.com/rabbitmq/rabbitmq-erlang-client.git
$ cd rabbitmq-erlang-client
$ make

Well, it won't compile too! And I am even more confused why I need to clone the rabbit erlang client again in this case, since the README file itself is already in the client's source code directory.

To make a conclusion, my trouble here is:

1. Should I use .ez archive file or source code?
2. How to use .ez archive file, is it safe?
3. How should I deal with the compile errors?
4. Why do I need to clone another client's source code as told by the README file?

I am used to put a library in "deps" folder in my project and rebar would build them altogether for me, I think that's a preferable choice, quite clear in terms of project's folder layout at least.
Any advice is appreciated :) Best regardsZhiqian _______________________________________________ erlang-questions mailing list erlang-questions@REDACTED http://erlang.org/mailman/listinfo/erlang-questions -------------- next part -------------- An HTML attachment was scrubbed... URL: From essen@REDACTED Thu Dec 17 10:12:14 2015 From: essen@REDACTED (=?UTF-8?Q?Lo=c3=afc_Hoguin?=) Date: Thu, 17 Dec 2015 10:12:14 +0100 Subject: [erlang-questions] Use Erlang AMQP client library In-Reply-To: References: Message-ID: <56727C6E.6050304@ninenines.eu> 3.5.7 is the old build system, not Erlang.mk. It requires you to fetch extra repositories manually, from what I understand. Erlang.mk fetches everything automatically. On 12/17/2015 09:51 AM, YuanZhiqian wrote: > Hi Alexey, > > It's interesting that I've just tried copying rebar.config from > the rebar-friendly fork to the canonical repository, what I intended is > to tell rebar to find the correct dependency, and then I ran rebar > get-deps && rebar compile, everything worked perfect. > > So I guess there's something wrong with the erlang.mk so that it's > unsuccessful to build using 'make' > > Best, > Zhiqian > > ------------------------------------------------------------------------ > Date: Wed, 16 Dec 2015 20:36:01 +0300 > Subject: Re: [erlang-questions] Use Erlang AMQP client library > From: binarin@REDACTED > To: on-your-mark@REDACTED > CC: erlang-questions@REDACTED > > Hi, > > The better place to ask this sort of questions is at rabbitmq-users > mailing list. > > But to summarize: > - Failure with rebar is an expected outcome, it's not the right build tool > - Failure with source tarball and `make` is an error - I've filed an > issue at https://github.com/rabbitmq/rabbitmq-erlang-client/issues/29 > - Failure to build from github checkout with `make` - couldn't reproduce > it. 
> Probably it's some problem with your build environment, but more
> details are needed if you want help from somebody )
>
> Best,
> Alexey
>
> 2015-12-16 20:11 GMT+03:00 YuanZhiqian:
>
> Thanks Alexey :), I'll have a try tomorrow. However, there seem to be a lot
> of compilation errors when building rabbitmq from source. Do you have
> a clue?
>
> Sent from my iPhone
>
> On 17 Dec 2015, at 01:06, Alexey Lebedeff wrote:
>
> Hi,
>
> If you are using erlang.mk or rebar3 as your build
> tool, you could just add amqp_client to your dependencies, and
> everything will start working automagically.
>
> If for some legacy reasons you are using rebar2, then you could
> use the rebar-friendly fork of the client at
> https://github.com/jbrisbin/amqp_client
>
> Best,
> Alexey
>
> 2015-12-16 18:09 GMT+03:00 YuanZhiqian:
>
> Hi guys,
>
> I have had a frustrating time tonight trying to use the rabbitmq
> library for erlang in my project. There are two ways provided
> by the official site: a .ez archive file as well as the source
> code. https://www.rabbitmq.com/erlang-client.html
>
> I wonder which one I should choose. I prefer to use the
> source code because erlang's documentation page says that
> loading code from an archive file is an experimental
> feature and is not recommended,
> http://www.erlang.org/doc/man/code.html; besides, I'm not
> very sure how to use a .ez archive file.
>
> On the other hand, the source code won't compile through.
> I searched on google and people say that it is not wise to
> compile the source code oneself to use rabbitmq at present,
> because of some tricky problem.
> > I have tried using rebar and make to build the source > code, but both failed with complaints as folllows: > > > $ rebar compile > ==> amqp_client-3.5.7-src (compile) > include/amqp_client.hrl:20: can't find include lib > "rabbit_common/include/rabbit.hrl" > include/amqp_client.hrl:21: can't find include lib > "rabbit_common/include/rabbit_framing.hrl" > src/amqp_gen_connection.erl:313: undefined macro > 'INTERNAL_ERROR' > include/amqp_client.hrl:23: record 'P_basic' undefined > src/amqp_gen_connection.erl:174: record amqp_error undefined > src/amqp_gen_connection.erl:176: record amqp_error undefined > src/amqp_gen_connection.erl:212: function internal_error/3 > undefined > src/amqp_gen_connection.erl:215: record 'connection.close' > undefined > src/amqp_gen_connection.erl:266: record 'connection.close' > undefined > src/amqp_gen_connection.erl:273: record 'connection.close' > undefined > src/amqp_gen_connection.erl:275: record > 'connection.close_ok' undefined > src/amqp_gen_connection.erl:280: record 'connection.blocked' > undefined > src/amqp_gen_connection.erl:285: record > 'connection.unblocked' undefined > src/amqp_gen_connection.erl:291: record amqp_error undefined > src/amqp_gen_connection.erl:352: record 'connection.close' > undefined > src/amqp_gen_connection.erl:355: record 'connection.close' > undefined > src/amqp_gen_connection.erl:357: variable 'Code' is unbound > src/amqp_gen_connection.erl:357: variable 'Text' is unbound > src/amqp_gen_connection.erl:368: record > 'connection.close_ok' undefined > src/amqp_gen_connection.erl:290: Warning: variable 'Other' > is unused > > > > $ make > rm -f deps.mk > echo > 
src/amqp_auth_mechanisms.erl:src/amqp_channel.erl:src/amqp_channels_manager.erl:src/amqp_channel_sup.erl:src/amqp_channel_sup_sup.erl:src/amqp_client.erl:src/amqp_connection.erl:src/amqp_connection_sup.erl:src/amqp_connection_type_sup.erl:src/amqp_direct_connection.erl:src/amqp_direct_consumer.erl:src/amqp_gen_connection.erl:src/amqp_gen_consumer.erl:src/amqp_main_reader.erl:src/amqp_network_connection.erl:src/amqp_rpc_client.erl:src/amqp_rpc_server.erl:src/amqp_selective_consumer.erl:src/amqp_sup.erl:src/amqp_uri.erl:src/rabbit_routing_util.erl:src/uri_parser.erl:include/amqp_client.hrl:include/amqp_client_internal.hrl:include/amqp_gen_consumer_spec.hrl:include/rabbit_routing_prefixes.hrl: > | escript ../rabbitmq-server/generate_deps deps.mk > ebin > escript: Failed to open file: ../rabbitmq-server/generate_deps > make: *** No rule to make target `deps.mk ', > needed by `ebin/amqp_auth_mechanisms.beam'. Stop. > > > > Besides, in the README file, the instruction tells me to do > like this: > > > $ git clone https://github.com/rabbitmq/rabbitmq-codegen.git > $ git clone https://github.com/rabbitmq/rabbitmq-server.git > $ git clone > https://github.com/rabbitmq/rabbitmq-erlang-client.git > $ cd rabbitmq-erlang-client > $ make > > Well, it won't compile too! And I am even more confused > why I need to clone the rabbit erlang client again in this > case, since the README file itself is already in the > client's source code directory. > > > To make a conclusion, my trouble here is: > > 1. Should I use .ez archive file or source code? > 2. How to use .ez archive file, is it safe? > 3. How should I deal with the compile errors? > 4. Why do I need to clone another client's source code as > told by the README file? > > I am used to put a library in "deps" folder in my project > and rebar would build them altogether for me, I think that's > a preferable choice, quite clear in terms of project's > folder layout at least. 
> > Any advice is appreciated :)
>
> Best regards
> Zhiqian

--
Loïc Hoguin
http://ninenines.eu
Author of The Erlanger Playbook,
A book about software development using Erlang

From chandrashekhar.mullaparthi@REDACTED Thu Dec 17 11:19:02 2015
From: chandrashekhar.mullaparthi@REDACTED (Chandru)
Date: Thu, 17 Dec 2015 10:19:02 +0000
Subject: [erlang-questions] Deploying multiple webapps
In-Reply-To: <20151216202509.GM26782@carfax.org.uk>
References: <20151215234147.GK26782@carfax.org.uk> <20151216001028.GL26782@carfax.org.uk> <20151216202509.GM26782@carfax.org.uk>
Message-ID:

On 16 December 2015 at 20:25, Hugo Mills wrote:
> Thanks for the reply.
>
> On Wed, Dec 16, 2015 at 12:17:50AM +0000, Chandru wrote:
> > On 16 December 2015 at 00:10, Hugo Mills wrote:
> >> - You can have a global cowboy handler (one module which is used in all
> > your backend erlang nodes) which provides internal routing for all your
> > services. At its most basic form, routing in cowboy is redirecting requests
> > based on URL to a module. So you just have to make sure this module is
> > common across all your erlang nodes, regardless of how you distribute your
> > services across nodes.
>
> So I make sure that I have a 'common_routing' (or whatever) module
> in each service -- do those have their own router (for the internal
> routing within each service) in each one?

Your common routing can be put in a separate application. So your list of applications in your .rel file would be something like

...
cowboy
common_routing
service_1
service_2
...
> > I guess that's something that Loïc's work with Cowboy and RabbitMQ
> would help with.
>
> > What am I missing here?
>
> It's probably what I'm missing...
>
> How do I handle deploying this? I really don't want to be building
> a different release for each configuration, so how do I enable/disable
> the different services in each separate deployment? How do I make sure
> that the unused services don't run when the release is run, and don't
> have routing entries in the top-level router? (Ideally, not shipping
> the unused code at all, but if I'm building a single release, I guess
> that's not going to be possible.)

One way is to ship the same release with all your services bundled in, but turn services on/off based on configuration. You specify in a common sys.config which nodes should have which services. Ship all your services in each node, and at startup they check if they should be running on the local node. If not, they just stay disabled. Similarly, the common_routing application can check configuration to see which handlers should be activated in each node.

If you don't want certain services to start at all if they are not supposed to run on a given node, you can develop an application whose sole job is to start other applications. It can examine the configuration and start whichever applications are relevant to the given node. Your release file will then have a specification such as

...
{cowboy, "cowboy_version"},
{common_routing, "common_routing_version"},
{app_loader, "app_loader_version"},
{service_1, "service_1_version", load},
{service_2, "service_2_version", load}
...

This means beam will only load these modules at startup. Your app_loader application on startup can examine its config and start whichever services need to be started. If you do not use the '-mode embedded' option to erl when starting up your node, you can even introduce new applications into the node without restarting it. I used this in production quite successfully in my previous role.
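A minimal sketch of such an app_loader is below, assuming a sys.config entry along the lines of {app_loader, [{services, [{service_1, ['node1@host']}, ...]}]} — all module, key, and node names here are illustrative, not from the original post:

```erlang
-module(app_loader).
-export([start_configured/0, apps_for_node/2]).

%% Start every service application that the configuration assigns
%% to the local node; everything else stays loaded but not started.
start_configured() ->
    {ok, Services} = application:get_env(app_loader, services),
    [ok = ensure_started(App) || App <- apps_for_node(node(), Services)],
    ok.

%% Pure helper: pick the applications whose node list contains Node.
apps_for_node(Node, Services) ->
    [App || {App, Nodes} <- Services, lists:member(Node, Nodes)].

ensure_started(App) ->
    case application:ensure_all_started(App) of
        {ok, _Started}  -> ok;
        {error, Reason} -> {error, {App, Reason}}
    end.
```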
It cut the time to introduce new services into production drastically. cheers, Chandru > > > > > > > On 15 December 2015 at 23:41, Hugo Mills wrote: > > > > > > > > > I've got a collection of small services, with minimal coupling > > > > > between the back ends of those services (orchestration is done > mostly > > > > > client-side). I'd like to put an HTTPS interface in front of each > one > > > > > -- say, with cowboy. > > > > > > > > > > What I'd also like to be able to do, at least in principle, is > > > > > deploy some arbitrary subset of those services on each machine in > my > > > > > (comedically-named) server farm. I'd like to be able to do this > with > > > > > one TLS configuration, and preferably under a single port. > > > > > > > > > > i.e., access my services through > > > > > > > > > > https://server.me/service1/... > > > > > https://server.me/service2/... > > > > > https://server.me/service3/... > > > > > > > > > > Now, in python-land, which is largely where I come from, I'd > set up > > > > > Apache with mod-wsgi, and deploy each WSGI app to a specific URL > > > > > within the same URL namespace. I'm not quite sure how to do that > > > > > easily with erlang+cowboy, because there seems to be no easy way of > > > > > treating a webapp as a unit within a larger server configuration. I > > > > > keep coming to one of two approaches: > > > > > > > > > > 1) Write each service completely independently (as HTTP), run it > on a > > > > > distinct port, and splice together the URL namespaces through a > > > > > reverse proxy on a "normal" web server like Apache. > > > > > > > > > > 2) Find some way to automatically write a top-level router for > cowboy, > > > > > for each set of services that I want to deploy to a machine. > > > > > > > > > > I don't much like option 1, but I like option 2 even less. 
I guess
> > I could write some kind of "top-level" app that, given a bunch of
> > webapp modules (via a configuration file of some kind), gets a router
> > for each module and transforms those routers into a single router
> > config. Does such a thing already exist?
> >
> > It all just feels a bit awkward, and I feel like I'm missing
> > something. What do other people do to put together this kind of setup?
> >
> > Hugo.
>
> --
> Hugo Mills | Beware geeks bearing GIFs
> hugo@REDACTED carfax.org.uk | http://carfax.org.uk/
> PGP: E2AB1DE4 |

From binarin@REDACTED Thu Dec 17 11:31:22 2015
From: binarin@REDACTED (Alexey Lebedeff)
Date: Thu, 17 Dec 2015 13:31:22 +0300
Subject: [erlang-questions] Strange interaction between Docker and Erlangs ports (exit_status lost)
In-Reply-To: References: Message-ID:

Hi,

Ah, docker at its best )

$ for iter in $(seq 1 100); do echo -n "$iter " 1>&2 ; docker run --rm edevil/docker-erlang-bug bash -c "sleep 1; erl -noshell -s test run -s init stop" 2>/dev/null; done | sort | uniq -c
    100 SUCCESS

but

$ for iter in $(seq 1 100); do echo -n "$iter " 1>&2 ; docker run --rm edevil/docker-erlang-bug erl -noshell -s test run -s init stop 2>/dev/null; done | sort | uniq -c
     12 FAILED
     88 SUCCESS

So you should either use the bash/sleep trick or try to find a bug in docker. Honestly, I just gave up ) Especially given that it's not very convenient to use erlang distribution inside docker containers without something like weavedns.

Best,
Alexey

2015-12-16 14:47 GMT+03:00 André Cruz:
> Hello.
>
> I've run into a strange problem when using Erlang and Docker. I have a
> small PoC that uses open_port to launch an "ls" command and, under normal
> circumstances, I get the exit_status message and the program terminates.
> However, if I run the command directly via "docker run", it seems the
> message is lost.
The "ls" command is executed and terminates, but I don't
> get the "exit_status" message.
>
> This issue seems related to an old message to the list:
> http://erlang.org/pipermail/erlang-questions/2013-September/075385.html
>
> The relevant code and Dockerfile can be found here:
> https://github.com/edevil/docker-erlang-bug
> The built image: https://hub.docker.com/r/edevil/docker-erlang-bug/
> Docker issue:
> https://github.com/docker/docker/issues/8910#issuecomment-165072444
>
> Now, I don't even know if this is a Docker issue, or an Erlang issue, or
> neither. Can someone shed some light on this issue?
>
> Thank you and best regards,
> André Cruz

From bchesneau@REDACTED Thu Dec 17 12:34:16 2015
From: bchesneau@REDACTED (Benoit Chesneau)
Date: Thu, 17 Dec 2015 11:34:16 +0000
Subject: [erlang-questions] ets:safe_fixtable/2 & ets:tab2file/{2, 3} question
Message-ID:

Reading the doc I see that for tables of the ordered_set type, safe_fixtable/2 is not necessary, as calls to first/1 and next/2 will always succeed. But what happens when I use `ets:tab2file/2` while keys are continuously added at the end? When does it stop?

Looking at the source code it seems that it batches the first 100 keys to the log file and then runs a select until the end: https://github.com/erlang/otp/blob/maint/lib/stdlib/src/ets.erl#L871-L876 It looks like it can enter a continuous loop.

What would be the safe way to make a dump of the ets table without entering a continuous loop, apart from locking while reading? Is there any existing solution that takes care of this already?

- benoît

From paul.wolf23@REDACTED Thu Dec 17 10:39:22 2015
From: paul.wolf23@REDACTED (Paul Wolf)
Date: Thu, 17 Dec 2015 10:39:22 +0100
Subject: [erlang-questions] Feedback for my first non-trivial Erlang program
In-Reply-To: References: <1838048.PgCCDcU252@changa>
Message-ID:

Hi guys,

sorry for my late answer. I am very thankful for your engagement and your answers, especially Craig's. I planned on working through them properly before giving an answer, but I felt that this might take some time and I didn't want to withhold my response for so long. Some things I want to emphasize:

- I understood from the beginning what (in theory) the problem with my program was. I intentionally (and I hope that was clear from the beginning) didn't try to solve it, because I feared that I would solve those things the wrong, non-idiomatic way (because of my lack of Erlang experience). I take it that your suggestions are the way to go.
- The main issues I anticipated were (as correctly identified by you as well): (a) problematic tail recursion and (b) log(k^n) runtime for n > working units. Also the memory I think would be bad, because k^n lists are built, if I am not mistaken. (c) You mentioned something about refactoring. I am not 100% sure that this is viable in terms of readability and performance. The fact that remove/3 isn't used rather makes me think that I have a bug somewhere in the logic. (d) The thing with counting down instead of up I haven't properly understood yet, but I will have a proper look into that.
- Please excuse that I didn't have the time to work through your great answers yet in all detail.
- This was primarily a learning exercise for me. I could have programmed the same thing in Java in linear runtime, probably with a similar effort. As it seems, there is quite a lot to learn for me ;)
- Regarding (b): You suggested to cache results.
This would be exactly my approach, but as I stated before, I was worried this is not the right thing to do in a functional language: I was hoping/expecting that Erlang might cache the results itself. My reasoning here (speaking more abstractly about functional languages in general, not Erlang in particular) is that functions are side-effect free, and two function calls with the same parameters always return the same result. Therefore I sensed a possibility for optimisation that Erlang doesn't seem to take. Do you know how other functional languages behave in this regard?

- Regarding the macro/function constant flavours: This is a discussion you can have in many languages, I guess, and I don't feel too interested in it. What sparked my interest, however, are the benchmarks by Technion. Function performance might be "good enough", but I was surprised that the Erlang compiler shows a real difference in runtime. I was thinking that my very simple constant functions would be the most trivial thing for the compiler to optimize away. I have the overall impression by now that the Erlang compiler optimisations are not very strong. Can you support/oppose that impression? Also regarding the same topic: I understand that functions in guards are not allowed because they might have side effects. But I also understand that the compiler (in theory) could check whether a function has side effects or not?

Thank you again for all your great answers so far. To be honest, I was kind of surprised/overwhelmed by the quality and quantity ;)

2015-12-17 3:19 GMT+01:00 Richard A. O'Keefe :
> On the subject of macros vs functions for constants,
> there is a third way.
> (There should be a fourth way. Abstract patterns can
> do *so* many nice things...)
> For a constant that you don't want to use in a pattern
> or guard, you can declare a function and then tell the
> compiler to -inline it.
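As a concrete illustration of that third way, a constant can be an ordinary function plus an inline directive (the module and constant names below are purely illustrative, not from the original program):

```erlang
-module(consts).
-export([retirement_age/0]).

%% Plain function constant; the inline directive asks the compiler to
%% substitute the body at call sites, removing the call overhead while
%% keeping an ordinary, testable function.
-compile({inline, [retirement_age/0]}).

retirement_age() -> 60.
```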
> > My advice would be:
> - If you want to name something that must appear
>   in a pattern or guard, you have no choice but to
>   use a macro. (Until abstract patterns are
>   implemented.)
> - Otherwise, always start by using plain functions.
>   Get the code working, then benchmark and profile it.
>   + If performance of these functions is OK, stop worrying.
>   + Try inlining and benchmark+profile again.
>   +++ It's generally better to try eliminating a calculation
>       than to micro-optimise it.

From andre@REDACTED Thu Dec 17 12:43:37 2015
From: andre@REDACTED (André Cruz)
Date: Thu, 17 Dec 2015 11:43:37 +0000
Subject: [erlang-questions] Strange interaction between Docker and Erlangs ports (exit_status lost)
In-Reply-To: References: Message-ID: <6A7A8460-098C-4476-8EB7-4766C3BBD80F@cabine.org>

On 17 Dec 2015, at 00:59, Nathaniel Waisbrot wrote:
>
> Does adding a `-t` flag to `docker run` (to give it a TTY) make any difference?

No difference.

> How long have you tried waiting? If it hangs forever, use `docker exec` to connect to the container and see if it's executing. Better: use `touch` instead of `ls` and then you can see if the process has run and exited.

The "exit_status" message never arrives. I've waited 1 day. :) The external command has run and finished; I can ssh into the machine and see that. The external process is defunct (zombie). However, the port still seems to be active in the beam vm, at least from what I could gather from the "proc info".

It seems the beam vm is not aware the program finished; does it use SIGCHLD? Can this be related? http://erlang.org/pipermail/erlang-questions/2015-October/086590.html If the processes are put in different process sessions, can the parent not receive the signal?

> Have you tried other commands? How about `echo`?

No change.
> Have you tried giving a full path (`/bin/ls`)?

No change.

André

From andre@REDACTED Thu Dec 17 12:47:27 2015
From: andre@REDACTED (André Cruz)
Date: Thu, 17 Dec 2015 11:47:27 +0000
Subject: [erlang-questions] Strange interaction between Docker and Erlangs ports (exit_status lost)
In-Reply-To: References: Message-ID: <1A27991C-1789-405C-86BE-997E75C511EA@cabine.org>

On 17 Dec 2015, at 10:31, Alexey Lebedeff wrote:
>
> Ah, docker at its best )
>
> $ for iter in $(seq 1 100); do echo -n "$iter " 1>&2 ; docker run --rm edevil/docker-erlang-bug bash -c "sleep 1; erl -noshell -s test run -s init stop" 2>/dev/null; done | sort | uniq -c
>     100 SUCCESS
>
> but
>
> $ for iter in $(seq 1 100); do echo -n "$iter " 1>&2 ; docker run --rm edevil/docker-erlang-bug erl -noshell -s test run -s init stop 2>/dev/null; done | sort | uniq -c
>      12 FAILED
>      88 SUCCESS
>
> So you should either use the bash/sleep trick or try to find a bug in docker. Honestly, I just gave up ) Especially given that it's not very convenient to use erlang distribution inside docker containers without something like weavedns.

There are some subtle changes that somehow mitigate the problem, for example:

$ docker run edevil/docker-erlang-bug bash -c "erl -noshell -s test run -s init stop 1>&1"
SUCCESS

Notice the strange stdout redirect. Without it:

$ docker run edevil/docker-erlang-bug bash -c "erl -noshell -s test run -s init stop"
FAILED

It seems to me that the Erlang port is not aware that the external command has completed. Can we be sure this is a Docker problem and not some incorrect assumption by the Beam VM about its environment? This recent e-mail http://erlang.org/pipermail/erlang-questions/2015-October/086590.html talks about launched processes being put in another process session; can this be related?

André
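For readers reproducing this, the failing scenario boils down to a port that never delivers its exit_status message. A minimal sketch of that pattern follows — the command path and timeout are illustrative, not the linked repository's actual test module; on a healthy system run/0 returns success, while in the failing Docker runs described above the receive would hit the timeout instead:

```erlang
-module(port_exit).
-export([run/0]).

%% Spawn an external command and wait for its exit_status message.
run() ->
    Port = open_port({spawn_executable, "/bin/ls"},
                     [exit_status, stderr_to_stdout, binary]),
    wait(Port).

wait(Port) ->
    receive
        {Port, {data, _Chunk}}   -> wait(Port);   % drain output first
        {Port, {exit_status, 0}} -> success;
        {Port, {exit_status, N}} -> {failed, N}
    after 5000 ->
        timeout                                   % the symptom in this thread
    end.
```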
From zxq9@REDACTED Thu Dec 17 14:27:08 2015
From: zxq9@REDACTED (zxq9)
Date: Thu, 17 Dec 2015 22:27:08 +0900
Subject: [erlang-questions] Feedback for my first non-trivial Erlang program
In-Reply-To: References: Message-ID: <4309580.Gyv4PvKBSB@changa>

Hi, Paul.

On Thursday, 17 December 2015 10:39:22 Paul Wolf wrote:
> - I understood from the beginning what (in theory) the problem with my
> program was. I intentionally (and I hope that was clear from the
> beginning) didn't try to solve it, because I feared that I solve those
> thing the wrong and not idiomatic way (because of the lack of my
> Erlang experience). I take that you're suggestions are the way to go.

Performance isn't everything -- it's only a problem once you're *sure* it is. With a bijective function like this with a limited range of values that will ever be needed in production anyway, it really wouldn't matter how inefficient it was, because you would only execute it one time and record the results in a table.

Sometimes readability can conflict with performance. But computers will continue to get faster (in terms of clock speed, bus speed, per-clock efficiency, cycles/money cost (the *real* one you care about), etc.). In this case time that passes in execution is temporarily your enemy, but the time that passes in your life is your friend.

On the contrary, performance tricks ALWAYS conflict with readability. And you WILL forget what you were thinking as time passes, so when the day comes that you want a readable version because computers have become faster you will get to enjoy rewriting all those nifty, confusing little hacks out of your code later. In this case time that passes in execution is still your enemy and you spend time in your life once fighting it, but time that passes in your life is now also your enemy, because your memory of the difference between the readable code and the "performant" code will fade.
(Unless Management is paying you fat in any case and directs you to geek out on performance -- in that case have fun geeking out on screaming performance. Sometimes this is the right attitude, just not nearly as often as Management typically leads itself to believe.)

The worst thing you can do is not write something that works at all because you are trying from the outset to write something perfect. With this in mind your code was a fine first version -- and it indeed did uncover what is either a bug in one important function *or* an obvious way to completely avoid a computation entirely (which is way better). So whatever.

> - The main issues I anticipated were (as correctly identified by you as well):
> (a) problematic tail recursion and

Use accumulators. This isn't just "an optimization" -- this is also a good way to *understand* what is happening within your functions, because you can check the accumulator value at any point, whereas a non-tail recursive function will not be able to report a value to you until the whole computation is rolled up again -- and if the function is crashing somewhere, this could be anywhere (and that could mean sifting through a whopper of a stack trace, considering how HUUUUGE the stack might be). Crashy values often occur at edge cases, and edge cases are often close to or the same as your base case. This can make debugging a super annoying puzzle. In Erlang (and most other functional languages) tail-recursion is idiomatic, simple recursion is not.

> (b) log(k^n) runtime (b) for n>working units. Also the memory I think
> would be bad, because k^n lists are build, if i am not mistaken.

The memory is exploding because it's leaving a bunch of unfinished computations lying around, pending the next result which itself is going to wait around for more dependent results, etc.
Since many of the original functions did things like rebuild Pensions where Pensions :: [{Value, Time}] by calling pensions(T) instead of appending new *relevant* pension values of T over time, there were a bajillion versions of Pensions temporarily in memory, and that gets even crazier whenever the expenses/1 part of tax/2 is called when T > 60. > (c) You mentioned something about refactoring. I am not 100% sure that > this is viable in terms of readability and performance. The fact that > remove/3 isn't used rather makes me think, that I have a bug somewhere > in the logic. This is *probably* a bug. But don't just take remove/3 as some abstract function when you test it. Test it first for correctness as an abstract function (based on its spec, it should always give correct answers regardless the ordering or values given as input) -- but *then* you should test it as a function over your *actual* values of Pensions, because those will only grow at a specific rate based on your constants, so you can know exactly what values will ever be passed to remove/3 with *those* specific constants. If remove/3 was correct per spec, and it turns out Pensions really will never match its alternate clause, then you can comment out remove/3 and its calling location and replace it with direct logic that shortcuts that operation entirely. (Do NOT delete it, though. Your constants may change someday and it would be silly to re-write it for no reason if it was already correct.) It is surprising how often computations that are central to an algorithm across its entire domain can be skipped entirely within the range of actual values that can possibly be passed as arguments to it. This comes up in graphics, all sorts of game calculations (movement, speed, distance, damage, auction prices, etc.), pathfinding, and sometimes finance. So do it properly, but keep an open mind. Skipping computations entirely is the Arch King of Speed Hacks. 
> (d) The thing with counting down instead of up, I havent properly > understood yet, but I will have a proper look into that. You are calculating a compound value over time. Sort of like Fibonacci. (I hate fib examples because they are almost always inapplicable to anything -- but in this case, holy crap, it actually is a good toy example!) Any fib(N) value depends on all the fib(N-1) values until fib(1). That means that to get *any* fib(N) you have to compute all the underlying ones already. Counting DOWN instead of UP means you are going to calculate *every* value that supports a given fib(N) a total of N-1 times FOR EACH fib(N - 1), and this means its N! times. If you are going to calculate them once, why not be done with it the first time? I can't explain it any better than an example: -module(fibses). -export([fib_desc/1, fib_asc/1, fib_asc/4]). %% Simple recursive, descending definition. fib_desc(N) -> ok = io:format("Calling: fib_desc(~tp)~n", [N]), fib_d(N). fib_d(0) -> 0; fib_d(1) -> 1; fib_d(N) -> fib_desc(N - 1) + fib_desc(N - 2). %%% Tail-recursive, ascending definition. fib_asc(N) -> ok = io:format("Calling: fib_asc(~tp)~n", [N]), fib_a(N). fib_a(0) -> 0; fib_a(1) -> 1; fib_a(N) -> fib_asc(N, 2, fib_asc(0), fib_asc(1)). fib_asc(N, Z, A, B) -> ok = io:format("Calling: fib_asc(~tp, ~tp, ~tp, ~tp)~n", [N, Z, A, B]), fib_a(N, Z, A, B). fib_a(N, N, A, B) -> A + B; fib_a(N, Z, A, B) -> fib_asc(N, Z + 1, B, A + B). The way we expect the function to work in our minds is like fib_asc/1: 1> fibses:fib_asc(6). Calling: fib_asc(6) Calling: fib_asc(0) Calling: fib_asc(1) Calling: fib_asc(6, 2, 0, 1) Calling: fib_asc(6, 3, 1, 1) Calling: fib_asc(6, 4, 1, 2) Calling: fib_asc(6, 5, 2, 3) Calling: fib_asc(6, 6, 3, 5) 8 But the naive definition EXPLOOOOODES! 2> fibses:fib_desc(6). 
Calling: fib_desc(6) Calling: fib_desc(5) Calling: fib_desc(4) Calling: fib_desc(3) Calling: fib_desc(2) Calling: fib_desc(1) Calling: fib_desc(0) Calling: fib_desc(1) Calling: fib_desc(2) Calling: fib_desc(1) Calling: fib_desc(0) Calling: fib_desc(3) Calling: fib_desc(2) Calling: fib_desc(1) Calling: fib_desc(0) Calling: fib_desc(1) Calling: fib_desc(4) Calling: fib_desc(3) Calling: fib_desc(2) Calling: fib_desc(1) Calling: fib_desc(0) Calling: fib_desc(1) Calling: fib_desc(2) Calling: fib_desc(1) Calling: fib_desc(0) 8 OUCH! When people say they want to parallelize fib(), this is what they wind up doing in a bunch of processes because they don't quite understand why this function is a horrible candidate for parallelization. Sure, you can run each of these in a separate process, but everything else depends on the same set of values over and over anyway, so nothing runs faster and you occupy every drop of processing muscle you have DOING USELESS WORK YOU ALREADY DID AND THEN ZOMG WHAT ARE YOU DOING DO YOU *ENJOY* GLOBAL WARMING WTF?!?!? AHHH! This is also why I hate most node.js code I've ever read. I'm not going to paste it here (people are unbelievably patient with me on this list as it is!) but try calling fib_desc/1 with any value over 8. (Actually, this version that wraps each call in an output function really gives me a better idea why Fibonacci was fascinated with this phenomenon.) Now, as for memory consumption, consider that *each* one of those needless calls to fib_d/1 in that module has to wait in memory, waiting for its tree of dependent results. The rate at which the *function* explodes makes the storage size of each value carried over the stack meaningless in terms of memory optimization -- the number of calls explodes so fast that even stack frames full of [reference (int)] can eat gigabytes before you can blink. > - Regarding (b): You suggested to cache results. 
This would be exactly
> my approach, but as I stated before, I was worried this is not the
> right thing to do in a functional language: I was hoping/expecting
> that Erlang might cache the results itself. My reasoning here is (more
> abstract in terms of functional languages in general, not Erlang in
> particular) that functions are side-effect free and two function calls
> with the same parameters always return the same result. Therefore I
> sensed a possibility for optimisation Erlang doesn't seem to do. Do
> you know how other functional languages behave in this regard?

It depends. This is called "memoization" and is a common technique, but not many language runtimes memoize results for you without you explicitly asking for it, or implementing it yourself (and any language I've used to write actual production software requires you to implement the memoization yourself; it's too hard to guess what makes sense to keep around -- this would require *actually* magical GC).

You have to be careful in functional languages written for imperative language runtimes and in multi-paradigm languages that can be used for FP but didn't anticipate functional programming. Generally problematic recursion in Scala and recursion depth limits in Python, for example, can trip you up, and that's a simple problem. Lots of languages that are just sorta-functional will manifest surprisingly weird processes from innocent-looking function definitions if you're not careful. Erlang is refreshingly straightforward and simple by comparison.

The general attitude here toward bottlenecks is NOT to make them more performant or sit around praying for compiler or runtime optimizations, but to instead AVOID HAVING BOTTLENECKS to begin with. Also recognize that there are simply some places Erlang is not the proper tool. Writing some number cruncher in a language tailored for that use case and talking to it over a socket can be the super speed hack you are looking for sometimes.
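Hand-rolled memoization in Erlang is only a few lines. Here is a sketch applied to the fib example from earlier in this thread, using the process dictionary as the cache (an ETS table would play the same role if the cache needs to be shared across processes; the module name is illustrative):

```erlang
-module(memofib).
-export([fib/1]).

%% Same naive recursion as the fib_desc/1 example earlier in the
%% thread, but each computed value is cached, so every fib(K) body
%% runs at most once per process instead of exponentially many times.
fib(N) when is_integer(N), N >= 0 ->
    case get({fib, N}) of
        undefined ->
            Value = compute(N),
            undefined = put({fib, N}, Value),  % first write; put/2 returns old value
            Value;
        Cached ->
            Cached
    end.

compute(0) -> 0;
compute(1) -> 1;
compute(N) -> fib(N - 1) + fib(N - 2).
```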
> - Regarding the macro/functions constant flavours: This is a > discussion you can have in many languages I guess and I don't feel to > interested in it. What sparked my interest however are the benchmarks > by Technion. Function performance might be "good enough", but I was > suprised, that the Erlang compiler has a real difference in runtime. I > was thinking, that my very simple constant functions would be the most > trivial thing to optimize away for the compiler. I have the overall > impression by now that the Erlang compiler optimisations are not very > strong. Can you support/oppose that impression? Also regarding the > same topic: I understand, that functions in guards are not allowed, > because they might have side-effects. But I also understand, that the > compiler (in theory) could check wether a function has sideeffects or > not? This exact optimization exists as an option to the compiler. (This is what Richard was referring to in his response.) But inlining like this is NOT the default. The default is to build a non-magical Erlang assembly of the program you wrote. Play with compiler options -P, -E, and -S to see the process yourself -- you can learn a lot about the way things work this way. The VM would be rather less accessible (to humans) if it defaulted to doing lots of neat little tricks. I think this is a fair tradeoff. The version I wrote doesn't really benefit from using macros over functions for your constants anyway -- because *most* of the values wind up in function arguments after the first iteration. -Craig From mononcqc@REDACTED Thu Dec 17 15:24:22 2015 From: mononcqc@REDACTED (Fred Hebert) Date: Thu, 17 Dec 2015 09:24:22 -0500 Subject: [erlang-questions] ets:safe_fixtable/2 & ets:tab2file/{2, 3} question In-Reply-To: References: Message-ID: <20151217142421.GB64758@fhebert-ltm2.internal.salesforce.com> On 12/17, Benoit Chesneau wrote: >But what happen when I use `ets:tab2file/2` while keys are continuously >added at the end? 
When does it stop?
>

I'm not sure what answer you expect to the question "how can I keep an infinitely growing table from taking an infinite amount of time to dump to disk" that doesn't require locking it to prevent the growth from showing up.

Do note that safe_fixtable/2 does *not* prevent newly inserted elements from showing up in your table -- it only prevents objects from being taken out or being iterated over twice. While it's easier to create a pathological case with an ordered_set table (keep adding +1 as keys near the end), it is not beyond the realm of possibility to do so with other table types (probably with lots of insertions and playing with process priorities, or predictable hash sequences).

I don't believe there's any way to lock a public table (other than implicit blocking in match and select functions). If I were to give a wild guess, I'd say to look at ets:info(Tab,size), and have your table-dumping process stop when it reaches the predetermined size or meets an earlier exit. This would let you bound the time it takes you to dump the table, at the cost of possibly neglecting to add information (which you would do anyway -- you would just favor older info before newer info). This would however imply reimplementing your own tab2file functionality.

Regards, Fred.

From bchesneau@REDACTED Thu Dec 17 17:13:29 2015 From: bchesneau@REDACTED (Benoit Chesneau) Date: Thu, 17 Dec 2015 16:13:29 +0000 Subject: [erlang-questions] ets:safe_fixtable/2 & ets:tab2file/{2, 3} question In-Reply-To: <20151217142421.GB64758@fhebert-ltm2.internal.salesforce.com> References: <20151217142421.GB64758@fhebert-ltm2.internal.salesforce.com> Message-ID: On Thu, Dec 17, 2015 at 3:24 PM Fred Hebert wrote: > On 12/17, Benoit Chesneau wrote: > >But what happen when I use `ets:tab2file/2` while keys are continuously > >added at the end? When does it stop?
> > > > I'm not sure what answer you expect to the question "how can I keep an > infinitely growing table from taking an infinite amount of time to dump > to disk" that doesn't require locking it to prevent the growth from > showing up. > well by keeping a version of the data at some point :) But that's not how it works unfortunately. > > Do note that safe_fixtable/2 does *not* prevent new inserted elements > from showing up in your table -- it only prevents objects from being > taken out or being iterated over twice. While it's easier to create a > pathological case with an ordered_set table (keeping adding +1 as keys > near the end), it is not beyond the realm of possibility to do so with > other table types (probably with lots of insertions and playing with > process priorities, or predictable hash sequences). > > I don't believe there's any way to lock a public table (other than > implicit blocking in match and select functions). If I were to give a > wild guess, I'd say to look at ets:info(Tab,size), and have your > table-dumping process stop when it reaches the predetermined size or > meets an earlier exit. This would let you bound the time it takes you to > dump the table, at the cost of possibly neglecting to add information > (which you would do anyway -- you would just favor older info before > newer info). This would however imply reimplementing your own tab2file > functionality. > > Good idea, i need to think a little more about it.. I wish it could be possible to fork an ets table at some point and only use this snapshot in memory like REDIS does literally by forking the process when dumping it. That would be useful... Thanks for the answer! - benoit -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From bchesneau@REDACTED Thu Dec 17 17:16:04 2015 From: bchesneau@REDACTED (Benoit Chesneau) Date: Thu, 17 Dec 2015 16:16:04 +0000 Subject: [erlang-questions] ets:safe_fixtable/2 & ets:tab2file/{2, 3} question In-Reply-To: References: <20151217142421.GB64758@fhebert-ltm2.internal.salesforce.com> Message-ID: On Thu, Dec 17, 2015 at 5:13 PM Benoit Chesneau wrote: > On Thu, Dec 17, 2015 at 3:24 PM Fred Hebert wrote: > >> On 12/17, Benoit Chesneau wrote: >> >But what happen when I use `ets:tab2file/2` while keys are continuously >> >added at the end? When does it stop? >> > >> >> I'm not sure what answer you expect to the question "how can I keep an >> infinitely growing table from taking an infinite amount of time to dump >> to disk" that doesn't require locking it to prevent the growth from >> showing up. >> > > well by keeping a version of the data at some point :) But that's not how > it works unfortunately. > > >> >> Do note that safe_fixtable/2 does *not* prevent new inserted elements >> from showing up in your table -- it only prevents objects from being >> taken out or being iterated over twice. While it's easier to create a >> pathological case with an ordered_set table (keeping adding +1 as keys >> near the end), it is not beyond the realm of possibility to do so with >> other table types (probably with lots of insertions and playing with >> process priorities, or predictable hash sequences). >> >> I don't believe there's any way to lock a public table (other than >> implicit blocking in match and select functions). If I were to give a >> wild guess, I'd say to look at ets:info(Tab,size), and have your >> table-dumping process stop when it reaches the predetermined size or >> meets an earlier exit. This would let you bound the time it takes you to >> dump the table, at the cost of possibly neglecting to add information >> (which you would do anyway -- you would just favor older info before >> newer info). 
This would however imply reimplementing your own tab2file >> functionality. >> >> > Good idea, i need to think a little more about it.. I wish it could be > possible to fork an ets table at some point and only use this snapshot in > memory like REDIS does literally by forking the process when dumping it. > That would be useful... > > Thanks for the answer! > >

side note, but i am thinking that selecting keys per batch also limits the possible effects of the concurrent writes, since it can work faster that way. Though writing to the file is slow.

- benoit

-------------- next part -------------- An HTML attachment was scrubbed... URL: From felixgallo@REDACTED Thu Dec 17 17:38:10 2015 From: felixgallo@REDACTED (Felix Gallo) Date: Thu, 17 Dec 2015 08:38:10 -0800 Subject: [erlang-questions] ets:safe_fixtable/2 & ets:tab2file/{2, 3} question In-Reply-To: References: <20151217142421.GB64758@fhebert-ltm2.internal.salesforce.com> Message-ID: You can take advantage of erlang's concurrency to get arbitrarily-close-to-redis semantics.

For example, redis's bgsave could be achieved by writing as usual to your ets table, but also sending a duplicate message to a gen_server whose job is to keep a second, slave ets table up to date. That gen_server would be the one to provide dumps (via to_dets or whatever other facility). Then if it has to pause while it dumps, its message queue grows for the duration but eventually flushes out and brings itself back up to date. Meanwhile the primary ets replica continues to be usable.

It's not a silver bullet because, like redis, you would still have to worry about the pathological conditions, like dumps taking so long that the slave gen_server's queue gets out of control, or out of memory conditions, etc., etc. But if you feel like implementing paxos or waiting about 3 months, you could also generalize the gen_server so that a group of them formed a distributed cluster.

F.
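This bgsave-style scheme sketches out naturally as a gen_server. The module below is an illustrative toy, not a drop-in implementation (the name shadow_tab and its API are invented here): writers insert into their primary table as usual and mirror each object to this process, which owns a private shadow table it can dump without blocking the primary.

```erlang
-module(shadow_tab).
-behaviour(gen_server).

-export([start_link/0, insert/1, dump/1]).
-export([init/1, handle_call/3, handle_cast/2]).

start_link() ->
    gen_server:start_link({local, ?MODULE}, ?MODULE, [], []).

%% Callers write to their primary ets table as usual, then also
%% mirror the object here with an async cast.
insert(Object) ->
    gen_server:cast(?MODULE, {insert, Object}).

%% Serialize the shadow copy to disk.
dump(Filename) ->
    gen_server:call(?MODULE, {dump, Filename}, infinity).

init([]) ->
    %% The shadow table is private: only this process touches it,
    %% so no concurrent writer can race the dump.
    Tab = ets:new(shadow, [set, private]),
    {ok, Tab}.

handle_call({dump, Filename}, _From, Tab) ->
    %% Mirrored writes that arrive while tab2file runs simply queue
    %% in this process's mailbox and are applied afterwards, so the
    %% dump is a point-in-time snapshot of what the server has seen.
    {reply, ets:tab2file(Tab, Filename), Tab}.

handle_cast({insert, Object}, Tab) ->
    true = ets:insert(Tab, Object),
    {noreply, Tab}.
```

The design trade-off is exactly the pathological case noted above: if dumps are slow relative to write volume, the server's mailbox (and its lag behind the primary table) grows unboundedly.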
On Thu, Dec 17, 2015 at 8:16 AM, Benoit Chesneau wrote: > > > On Thu, Dec 17, 2015 at 5:13 PM Benoit Chesneau > wrote: > >> On Thu, Dec 17, 2015 at 3:24 PM Fred Hebert wrote: >> >>> On 12/17, Benoit Chesneau wrote: >>> >But what happen when I use `ets:tab2file/2` while keys are continuously >>> >added at the end? When does it stop? >>> > >>> >>> I'm not sure what answer you expect to the question "how can I keep an >>> infinitely growing table from taking an infinite amount of time to dump >>> to disk" that doesn't require locking it to prevent the growth from >>> showing up. >>> >> >> well by keeping a version of the data at some point :) But that's not how >> it works unfortunately. >> >> >>> >>> Do note that safe_fixtable/2 does *not* prevent new inserted elements >>> from showing up in your table -- it only prevents objects from being >>> taken out or being iterated over twice. While it's easier to create a >>> pathological case with an ordered_set table (keeping adding +1 as keys >>> near the end), it is not beyond the realm of possibility to do so with >>> other table types (probably with lots of insertions and playing with >>> process priorities, or predictable hash sequences). >>> >>> I don't believe there's any way to lock a public table (other than >>> implicit blocking in match and select functions). If I were to give a >>> wild guess, I'd say to look at ets:info(Tab,size), and have your >>> table-dumping process stop when it reaches the predetermined size or >>> meets an earlier exit. This would let you bound the time it takes you to >>> dump the table, at the cost of possibly neglecting to add information >>> (which you would do anyway -- you would just favor older info before >>> newer info). This would however imply reimplementing your own tab2file >>> functionality. >>> >>> >> Good idea, i need to think a little more about it.. 
I wish it could be >> possible to fork an ets table at some point and only use this snapshot in >> memory like REDIS does literally by forking the process when dumping it. >> That would be useful... >> >> Thanks for the answer! >> >> > side note, but i am thinking that selecting keys per batch also limit the > possible effects of the concurrent writes since it can work faster that > way. though writing to the file is slow. > > - benoit > > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From garazdawi@REDACTED Thu Dec 17 17:51:22 2015 From: garazdawi@REDACTED (Lukas Larsson) Date: Thu, 17 Dec 2015 17:51:22 +0100 Subject: [erlang-questions] Strange interaction between Docker and Erlangs ports (exit_status lost) In-Reply-To: <1A27991C-1789-405C-86BE-997E75C511EA@cabine.org> References: <1A27991C-1789-405C-86BE-997E75C511EA@cabine.org> Message-ID: Hello, I did some digging into this and it appears that some extra process (I don't know which one) sends a SIGCHLD to beam.smp which is caught and does not match any pid that the spawn driver is interested in. The spawn driver is built around the assumption that no extra SIGCHLD arrives so after receiving the extra SIGCHLD it does not go looking for more as it thinks it has gotten all it should and thus the exit_status of the ls command is never received. I changed the spawn driver to no longer assume that it is interested in each SIGCHLD but then it starts spinning like crazy over waitpid so we really have to figure out what that extra process is in order to do anything about it. For some reason strace in the docker images I built is very very broken. 
If someone who is better at working with docker wants to pick it up and have a look I've forked and added some things to André's repo: https://github.com/garazdawi/docker-erlang-bug and the "fix" with trace output in here: https://github.com/garazdawi/otp/tree/lukas/erts/docker-rogue-process-fix-kinda

The output I get is:

child sleep
Signal chld waiter
23: About to execute exec inet_gethost 4
child died 23 0
child died 24 10
25: About to execute exec ls
25: ready_input read 67
child died 25 0
25: report_exit_status 0 -> 0x7f23ab9c0c20
25: report_exit_status 0 -> 0x7f23ab9c0c20
25: ready_input read 0
25: port_inp_failure 0
Dockerfile README.md erlang-OTP-18.2.tar.gz test.beam test.erl
SUCCESS

The question is: what is this child 24 that dies with status 10? It seems to be sticking together with inet_gethost, but I don't understand why it should generate extra SIGCHLDs.

Lukas

On Thu, Dec 17, 2015 at 12:47 PM, André Cruz wrote: > On 17 Dec 2015, at 10:31, Alexey Lebedeff wrote: > > > > Ah, docker at its best ) > > > > $ for iter in $(seq 1 100); do echo -n "$iter " 1>&2 ; docker run --rm > edevil/docker-erlang-bug bash -c "sleep 1; erl -noshell -s test run -s init > stop" 2>/dev/null; done | sort | uniq -c > > 100 SUCCESS > > > > but > > > > for iter in $(seq 1 100); do echo -n "$iter " 1>&2 ; docker run --rm > edevil/docker-erlang-bug erl -noshell -s test run -s init stop 2>/dev/null; > done | sort | uniq -c > > 12 FAILED > > 88 SUCCESS > > > > So you should either use bash/sleep trick or try find a bug in docker. > Honestly, I just gave up ) Especially given that it's not very convinient > to use erlang distribution inside docker containers without something like > weavedns. > > There are some subtle changes that somehow mitigate the problem, for > example: > > $ docker run edevil/docker-erlang-bug bash -c "erl -noshell -s test run -s > init stop 1>&1" > SUCCESS > > Notice the strange stdout redirect.
> Without it: > > $ docker run edevil/docker-erlang-bug bash -c "erl -noshell -s test run -s > init stop" > FAILED > > It seems to me that the Erlang port is not aware that the external command > has completed. Can we be sure this is a Docker problem and not some > incorrect assumption by the Beam VM about its environment? This recent > e-mail > http://erlang.org/pipermail/erlang-questions/2015-October/086590.html > talks about launched processes being on another process session, can this > be related? > > André > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions > -------------- next part -------------- An HTML attachment was scrubbed... URL: From andre@REDACTED Thu Dec 17 18:14:16 2015 From: andre@REDACTED (=?utf-8?Q?Andr=C3=A9_Cruz?=) Date: Thu, 17 Dec 2015 17:14:16 +0000 Subject: [erlang-questions] Strange interaction between Docker and Erlangs ports (exit_status lost) In-Reply-To: References: <1A27991C-1789-405C-86BE-997E75C511EA@cabine.org> Message-ID: <1C729710-F66E-4D10-B0F1-E2C18EDA3A29@cabine.org> Hello. First of all, thank you for your work!

> On 17 Dec 2015, at 16:51, Lukas Larsson wrote: > > the questions is what is this child 24 that dies with status 10? It seems to be sticking together with inet_gethost, but I don't understand why it should generate extra SIGCHLDs.

I used this tool on the docker host: http://users.suse.com/~krahmer/exec-notify.c

And the output is:

FORK:parent(pid,tgid)=11154,11130 child(pid,tgid)=11157,11157 [/usr/local/lib/erlang/erts-7.1/bin/beam.smp -- -root /usr/local/lib/erlang -progname erl -- -home /root -- -noshell -s test run -s init stop ]
EXEC:pid=11157,tgid=11157 [Uid: 0 0 0 0] [/usr/local/lib/erlang/erts-7.1/bin/child_setup false . exec inet_gethost 4 3:524287 11:1 12:0 - ]
EXEC:pid=11157,tgid=11157 [Uid: 0 0 0 0] [sh -c exec inet_gethost 4 ]
EXEC:pid=11157,tgid=11157 [Uid: 0 0 0 0] [inet_gethost 4 ]
FORK:parent(pid,tgid)=11157,11157 child(pid,tgid)=11158,11158 [inet_gethost 4 ]
EXIT:pid=11157,11157 exit code=0
EXIT:pid=11158,11158 exit code=10
FORK:parent(pid,tgid)=11154,11130 child(pid,tgid)=11159,11159 [/usr/local/lib/erlang/erts-7.1/bin/beam.smp -- -root /usr/local/lib/erlang -progname erl -- -home /root -- -noshell -s test run -s init stop ]
EXEC:pid=11159,tgid=11159 [Uid: 0 0 0 0] [/usr/local/lib/erlang/erts-7.1/bin/child_setup false . exec ls 3:524287 11:1 12:0 - ]
EXEC:pid=11159,tgid=11159 [Uid: 0 0 0 0] [sh -c exec ls ]
EXEC:pid=11159,tgid=11159 [Uid: 0 0 0 0] [ls ]
EXIT:pid=11159,11159 exit code=0

I don't yet know why that extra process appears, but it sure seems related to inet_gethost. It appears to fork().

André

From eriksoe@REDACTED Thu Dec 17 19:34:02 2015 From: eriksoe@REDACTED (=?UTF-8?Q?Erik_S=C3=B8e_S=C3=B8rensen?=) Date: Thu, 17 Dec 2015 19:34:02 +0100 Subject: [erlang-questions] Feedback for my first non-trivial Erlang program In-Reply-To: <4309580.Gyv4PvKBSB@changa> References: <4309580.Gyv4PvKBSB@changa> Message-ID: Good illustration. Fibonacci to the rescue - for once...

I feel compelled to point out, however, that the talk about memory explosion is not applicable here. In Fibonacci - and, I believe, in the original program (though I didn't read it closely) - the maximum stack depth, and therefore the amount of stack memory required, is only linear in n. (Well, with lists and such, it might be quadratic for the OP.) The explosion is only timewise. The accumulating, tail-recursive version runs in constant space, of course. Better, but linear would still do.

/Erik

On 17/12/2015 at 14.27, "zxq9" wrote: > > Hi, Paul. > > On 2015年12月17日 木曜日 10:39:22 Paul Wolf wrote: > > - I understood from the beginning what (in theory) the problem with my > > program was.
I intentionally (and I hope that was clear from the > > beginning) didn't try to solve it, because I feared that I solve those > > thing the wrong and not idiomatic way (because of the lack of my > > Erlang experience). I take that you're suggestions are the way to go. > > Performance isn't everything -- its only a problem once you're *sure* it > is. With a bijective function like this with a limited range of values > that will ever be needed in production anyway, it really wouldn't matter > how inefficient it was because you would only execute it one time and > record the results in a table. > > Sometimes readability can conflict with performance. But computers > will continue to get faster (in terms of clock speed, bus speed, per-clock > efficiency, cycles/money cost (the *real* one you care about), etc.). > > In this case time that passes in execution is temporarily your enemy, but > the time that passes in your life is your friend. > > On the contrary, performance tricks ALWAYS conflict with readability. And > you WILL forget what you were thinking as time passes, so when the day comes > that you want a readable version because computers have become faster you > will get to enjoy rewriting all those nifty, confusing little hacks out of > your code later. > > In this case time that passes in execution is still your enemy and you spend > time in your life once fighting it, but time that passes in your life is now > also your enemy because your memory of the difference between the readable > code and the "performant" code will fade. (Unless Management is paying you > fat in any case and directs you to geek out on performance -- in that case > have fun geeking out on screaming performance. Sometimes this is the right > attitude, just not nearly as often as Management typically leads itself to > believe.) > > The worst thing you can do is not write something that works at all because > you are trying from the outset to write something perfect. 
With this in > mind your code was a fine first version -- and it indeed did uncover what > is either a bug in one important function *or* an obvious way to completely > avoid a computation entirely (which is way better). > > So whatever. > > > - The main issues I anticipated were (as correctly identified by you as well): > > (a) problematic tail recursion and > > Use accumulators. This isn't just "an optimization" -- this is also a good > way to *understand* what is happening within your functions, because you > can check the accumulator value at any point, whereas a non-tail recursive > function will not be able to report a value to you until the whole > computation is rolled up again -- and if the function is crashing somewhere > this could be anywhere (and that could mean sifting through a whopper of a > stack trace, considering how HUUUUGE the stack might be). Crashy values > often occur at edge cases, and edge cases are often close to or the same as > your base case. This can make debugging a super annoying puzzle. > > In Erlang (and most other functional languages) tail-recursion is idiomatic, > simple recursion is not. > > > (b) log(k^n) runtime (b) for n>working units. Also the memory I think > > would be bad, because k^n lists are build, if i am not mistaken. > > The memory is exploding because its leaving a bunch of unfinished > computations lying around, pending the next result which itself is going > to wait around for more dependent results, etc. > > Since many of the original functions did things like rebuild Pensions > where Pensions :: [{Value, Time}] by calling pensions(T) instead of > appending new *relevant* pension values of T over time, there were a > bajillion versions of Pensions temporarily in memory, and that gets even > crazier whenever the expenses/1 part of tax/2 is called when T > 60. > > > (c) You mentioned something about refactoring. I am not 100% sure that > > this is viable in terms of readability and performance. 
The fact that > > remove/3 isn't used rather makes me think, that I have a bug somewhere > > in the logic. > > This is *probably* a bug. But don't just take remove/3 as some abstract > function when you test it. Test it first for correctness as an abstract > function (based on its spec, it should always give correct answers regardless > the ordering or values given as input) -- but *then* you should test it > as a function over your *actual* values of Pensions, because those will > only grow at a specific rate based on your constants, so you can know > exactly what values will ever be passed to remove/3 with *those* specific > constants. > > If remove/3 was correct per spec, and it turns out Pensions really will > never match its alternate clause, then you can comment out remove/3 and > its calling location and replace it with direct logic that shortcuts that > operation entirely. (Do NOT delete it, though. Your constants may change > someday and it would be silly to re-write it for no reason if it was > already correct.) > > It is surprising how often computations that are central to an algorithm > across its entire domain can be skipped entirely within the range of > actual values that can possibly be passed as arguments to it. This comes > up in graphics, all sorts of game calculations (movement, speed, distance, > damage, auction prices, etc.), pathfinding, and sometimes finance. > > So do it properly, but keep an open mind. > > Skipping computations entirely is the Arch King of Speed Hacks. > > > (d) The thing with counting down instead of up, I havent properly > > understood yet, but I will have a proper look into that. > > You are calculating a compound value over time. Sort of like Fibonacci. > > (I hate fib examples because they are almost always inapplicable to > anything -- but in this case, holy crap, it actually is a good toy > example!) > > Any fib(N) value depends on all the fib(N-1) values until fib(1). 
That > means that to get *any* fib(N) you have to compute all the underlying > ones already. Counting DOWN instead of UP means you are going to calculate > *every* value that supports a given fib(N) a total of N-1 times FOR EACH > fib(N - 1), and this means its N! times. If you are going to calculate > them once, why not be done with it the first time? > > I can't explain it any better than an example: > > -module(fibses). > > -export([fib_desc/1, fib_asc/1, fib_asc/4]). > > %% Simple recursive, descending definition. > fib_desc(N) -> > ok = io:format("Calling: fib_desc(~tp)~n", [N]), > fib_d(N). > > fib_d(0) -> 0; > fib_d(1) -> 1; > fib_d(N) -> fib_desc(N - 1) + fib_desc(N - 2). > > > %%% Tail-recursive, ascending definition. > fib_asc(N) -> > ok = io:format("Calling: fib_asc(~tp)~n", [N]), > fib_a(N). > > fib_a(0) -> 0; > fib_a(1) -> 1; > fib_a(N) -> fib_asc(N, 2, fib_asc(0), fib_asc(1)). > > fib_asc(N, Z, A, B) -> > ok = io:format("Calling: fib_asc(~tp, ~tp, ~tp, ~tp)~n", [N, Z, A, B]), > fib_a(N, Z, A, B). > > fib_a(N, N, A, B) -> A + B; > fib_a(N, Z, A, B) -> fib_asc(N, Z + 1, B, A + B). > > > The way we expect the function to work in our minds is like fib_asc/1: > > 1> fibses:fib_asc(6). > Calling: fib_asc(6) > Calling: fib_asc(0) > Calling: fib_asc(1) > Calling: fib_asc(6, 2, 0, 1) > Calling: fib_asc(6, 3, 1, 1) > Calling: fib_asc(6, 4, 1, 2) > Calling: fib_asc(6, 5, 2, 3) > Calling: fib_asc(6, 6, 3, 5) > 8 > > But the naive definition EXPLOOOOODES! > > 2> fibses:fib_desc(6). 
> Calling: fib_desc(6) > Calling: fib_desc(5) > Calling: fib_desc(4) > Calling: fib_desc(3) > Calling: fib_desc(2) > Calling: fib_desc(1) > Calling: fib_desc(0) > Calling: fib_desc(1) > Calling: fib_desc(2) > Calling: fib_desc(1) > Calling: fib_desc(0) > Calling: fib_desc(3) > Calling: fib_desc(2) > Calling: fib_desc(1) > Calling: fib_desc(0) > Calling: fib_desc(1) > Calling: fib_desc(4) > Calling: fib_desc(3) > Calling: fib_desc(2) > Calling: fib_desc(1) > Calling: fib_desc(0) > Calling: fib_desc(1) > Calling: fib_desc(2) > Calling: fib_desc(1) > Calling: fib_desc(0) > 8 > > > OUCH! > > When people say they want to parallelize fib(), this is what they wind up > doing in a bunch of processes because they don't quite understand why this > function is a horrible candidate for parallelization. Sure, you can run each > of these in a separate process, but everything else depends on the same set > of values over and over anyway, so nothing runs faster and you occupy every > drop of processing muscle you have DOING USELESS WORK YOU ALREADY DID AND > THEN ZOMG WHAT ARE YOU DOING DO YOU *ENJOY* GLOBAL WARMING WTF?!?!? AHHH! > > This is also why I hate most node.js code I've ever read. > > I'm not going to paste it here (people are unbelievably patient with me on > this list as it is!) but try calling fib_desc/1 with any value over 8. > > (Actually, this version that wraps each call in an output function really > gives me a better idea why Fibonacci was fascinated with this phenomenon.) > > Now, as for memory consumption, consider that *each* one of those needless > calls to fib_d/1 in that module has to wait in memory, waiting for its > tree of dependent results. The rate at which the *function* explodes makes > the storage size of each value carried over the stack meaningless in terms > of memory optimization -- the number of calls explodes so fast that even > stack frames full of [reference (int)] can eat gigabytes before you can > blink. 
> > > - Regarding (b): You suggested to cache results. This would be exactly > > my approach, but as I stated before, I was worried, this is not the > > right thing to do in a functional language: I was hoping/expecting > > Erlang to might cache the results itself. My reasoning here is (more > > abstract in terms of functional languages general and not Erlang in > > specific) that functions are side effect free and two function calls > > with the same parameter always returns the same result. Therefore I > > sensed a possibility for optimisation Erlang doesn't seem to do. Do > > you know how other functional languages behave in this regard? > > It depends. This is called "memoization" and is a common technique, but > not many language runtimes memoize results for you without you explicitly > asking it to, or implementing it yourself (and any language I've used to > write actual production software requires you to implement the memoization > yourself, its too hard to guess what makes sense to keep around -- this > would require *actually* magical GC). > > You have to be careful in functional languages written for imperative > language runtimes and in multi-paradigm languages that can be used for FP > that didn't anticipate functional programming. Generally problematic > recursion in Scala and recursion depth limits in Python, for example, can > catch trip you up, and that's a simple problem. Lots of languages that are > just sorta-functional will manifest surprisingly weird processes from > innocent looking function definitions if you're not careful. Erlang > refreshingly straightforward and simple by comparison. > > The general attitude here toward bottlenecks is NOT to make them more > performant or sit around praying for compiler or runtime optimizations, > but to instead AVOID HAVING BOTTLENECKS to begin with. > > Also recognize that there are simply some places Erlang is not the proper > tool. 
Writing some number cruncher in a language tailored for that use > case and talking to it over a socket can be the super speed hack you are > looking for sometimes. > > > - Regarding the macro/functions constant flavours: This is a > > discussion you can have in many languages I guess and I don't feel to > > interested in it. What sparked my interest however are the benchmarks > > by Technion. Function performance might be "good enough", but I was > > suprised, that the Erlang compiler has a real difference in runtime. I > > was thinking, that my very simple constant functions would be the most > > trivial thing to optimize away for the compiler. I have the overall > > impression by now that the Erlang compiler optimisations are not very > > strong. Can you support/oppose that impression? Also regarding the > > same topic: I understand, that functions in guards are not allowed, > > because they might have side-effects. But I also understand, that the > > compiler (in theory) could check wether a function has sideeffects or > > not? > > This exact optimization exists as an option to the compiler. (This is > what Richard was referring to in his response.) But inlining like this is > NOT the default. The default is to build a non-magical Erlang assembly > of the program you wrote. Play with compiler options -P, -E, and -S to > see the process yourself -- you can learn a lot about the way things > work this way. The VM would be rather less accessible (to humans) if it > defaulted to doing lots of neat little tricks. > > I think this is a fair tradeoff. The version I wrote doesn't really benefit > from using macros over functions for your constants anyway -- because *most* > of the values wind up in function arguments after the first iteration. 
> > -Craig > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions -------------- next part -------------- An HTML attachment was scrubbed... URL: From trapexit@REDACTED Thu Dec 17 19:43:55 2015 From: trapexit@REDACTED (Antonio SJ Musumeci) Date: Thu, 17 Dec 2015 13:43:55 -0500 Subject: [erlang-questions] ets:safe_fixtable/2 & ets:tab2file/{2, 3} question In-Reply-To: References: <20151217142421.GB64758@fhebert-ltm2.internal.salesforce.com> Message-ID: What exactly is it that you need behaviorally? You could also have a process which just continuously iterates over the table, placing the records into a rotating `disk_log`. If you include a timestamp or have something to know precisely which version of the record you have, you can replay the log and recover the state you want. If you need straight-up snapshots then maybe a liberal select would give you a dump. I don't recall, however, at what level the select/match locks, and if the table is large it'd be expensive memory-wise. It's hard to beat a COW setup if you need snapshots. On Thu, Dec 17, 2015 at 11:38 AM, Felix Gallo wrote: > You can take advantage of erlang's concurrency to get > arbitrarily-close-to-redis semantics. > > For example, redis's bgsave could be achieved by writing as usual to your > ets table, but also sending a duplicate message to a gen_server whose job > it is to keep up to date a second, slave, ets table. That gen_server would > be the one to provide dumps (via to_dets or whatever other facility). Then > if it has to pause while it dumps, its message queue grows during the > duration but eventually flushes out and brings itself back up to date. > Meanwhile the primary ets replica continues to be usable.
> > It's not a silver bullet because, like redis, you would still have to > worry about the pathological conditions, like dumps taking so long that the > slave gen_server's queue gets out of control, or out of memory conditions, > etc., etc. But if you feel like implementing paxos or waiting about 3 > months, you could also generalize the gen_server so that a group of them > formed a distributed cluster. > > F. > > > On Thu, Dec 17, 2015 at 8:16 AM, Benoit Chesneau > wrote: > >> >> >> On Thu, Dec 17, 2015 at 5:13 PM Benoit Chesneau >> wrote: >> >>> On Thu, Dec 17, 2015 at 3:24 PM Fred Hebert wrote: >>> >>>> On 12/17, Benoit Chesneau wrote: >>>> >But what happen when I use `ets:tab2file/2` while keys are continuously >>>> >added at the end? When does it stop? >>>> > >>>> >>>> I'm not sure what answer you expect to the question "how can I keep an >>>> infinitely growing table from taking an infinite amount of time to dump >>>> to disk" that doesn't require locking it to prevent the growth from >>>> showing up. >>>> >>> >>> well by keeping a version of the data at some point :) But that's not >>> how it works unfortunately. >>> >>> >>>> >>>> Do note that safe_fixtable/2 does *not* prevent new inserted elements >>>> from showing up in your table -- it only prevents objects from being >>>> taken out or being iterated over twice. While it's easier to create a >>>> pathological case with an ordered_set table (keeping adding +1 as keys >>>> near the end), it is not beyond the realm of possibility to do so with >>>> other table types (probably with lots of insertions and playing with >>>> process priorities, or predictable hash sequences). >>>> >>>> I don't believe there's any way to lock a public table (other than >>>> implicit blocking in match and select functions). If I were to give a >>>> wild guess, I'd say to look at ets:info(Tab,size), and have your >>>> table-dumping process stop when it reaches the predetermined size or >>>> meets an earlier exit. 
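[Editor's note: Fred's bounded-dump suggestion might look roughly like this in code. This is an editorial sketch, untested against large tables; the function names are invented.]

```erlang
%% Walk the table in batches under safe_fixtable, writing at most the
%% number of objects the table held when the dump started.
dump_bounded(Tab, File) ->
    Limit = ets:info(Tab, size),
    {ok, Fd} = file:open(File, [write]),
    ets:safe_fixtable(Tab, true),
    try
        walk(ets:select(Tab, [{'_', [], ['$_']}], 100), Fd, Limit)
    after
        ets:safe_fixtable(Tab, false),
        ok = file:close(Fd)
    end.

walk('$end_of_table', _Fd, _Left) -> ok;
walk(_Batch, _Fd, Left) when Left =< 0 -> ok;
walk({Objs, Cont}, Fd, Left) ->
    Take = lists:sublist(Objs, Left),
    ok = file:write(Fd, [io_lib:format("~p.~n", [O]) || O <- Take]),
    walk(ets:select(Cont), Fd, Left - length(Take)).
```

safe_fixtable/2 guarantees each object is seen at most once during the walk; the size bound is what keeps a concurrently growing table from making the dump run forever.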
This would let you bound the time it takes you to >>>> dump the table, at the cost of possibly neglecting to add information >>>> (which you would do anyway -- you would just favor older info before >>>> newer info). This would however imply reimplementing your own tab2file >>>> functionality. >>>> >>>> >>> Good idea, i need to think a little more about it.. I wish it could be >>> possible to fork an ets table at some point and only use this snapshot in >>> memory like REDIS does literally by forking the process when dumping it. >>> That would be useful... >>> >>> Thanks for the answer! >>> >>> >> side note, but i am thinking that selecting keys per batch also limit the >> possible effects of the concurrent writes since it can work faster that >> way. though writing to the file is slow. >> >> - benoit >> >> _______________________________________________ >> erlang-questions mailing list >> erlang-questions@REDACTED >> http://erlang.org/mailman/listinfo/erlang-questions >> >> > > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From binarin@REDACTED Thu Dec 17 20:59:36 2015 From: binarin@REDACTED (Alexey Lebedeff) Date: Thu, 17 Dec 2015 22:59:36 +0300 Subject: [erlang-questions] Strange interaction between Docker and Erlangs ports (exit_status lost) In-Reply-To: References: <1A27991C-1789-405C-86BE-997E75C511EA@cabine.org> Message-ID: Hi, You're on a right track here. This behaviour is due to kernel reparenting of orphaned processes to pid 1 (init) in current pid namespace. For zombies it even sends SIGCHLD to this pid 1 process when reparenting. So I think it's possible to reproduce this even without docker, just with NIF that does fork(2). So the assumption about no extra SIGCHLD is wrong and needs to be fixed. Are you willing to do this? Or I could give it a try. 
Best,
Alexey

On 17 Dec 2015 at 19:51, "Lukas Larsson" wrote:

> Hello,
>
> I did some digging into this and it appears that some extra process (I
> don't know which one) sends a SIGCHLD to beam.smp which is caught and does
> not match any pid that the spawn driver is interested in. The spawn driver
> is built around the assumption that no extra SIGCHLD arrives, so after
> receiving the extra SIGCHLD it does not go looking for more as it thinks it
> has gotten all it should, and thus the exit_status of the ls command is
> never received. I changed the spawn driver to no longer assume that it is
> interested in each SIGCHLD, but then it starts spinning like crazy over
> waitpid, so we really have to figure out what that extra process is in order
> to do anything about it.
>
> For some reason strace in the docker images I built is very very broken.
> If someone who is better at working with docker wants to pick it up and
> have a look, I've forked and added some things to André's repo:
> https://github.com/garazdawi/docker-erlang-bug and the "fix" with trace
> output in here:
> https://github.com/garazdawi/otp/tree/lukas/erts/docker-rogue-process-fix-kinda
>
> The output I get is:
> child sleep
> Signal chld waiter
> 23: About to execute exec inet_gethost 4
> child died 23 0
> child died 24 10
> 25: About to execute exec ls
> 25: ready_input read 67
> child died 25 0
> 25: report_exit_status 0 -> 0x7f23ab9c0c20
> 25: report_exit_status 0 -> 0x7f23ab9c0c20
> 25: ready_input read 0
> 25: port_inp_failure 0
> Dockerfile
> README.md
> erlang-OTP-18.2.tar.gz
> test.beam
> test.erl
>
> SUCCESS
>
> the question is what is this child 24 that dies with status 10? It seems
> to be sticking together with inet_gethost, but I don't understand why it
> should generate extra SIGCHLDs.
>
> Lukas
>
> On Thu, Dec 17, 2015 at 12:47 PM, André
Cruz wrote: > >> On 17 Dec 2015, at 10:31, Alexey Lebedeff wrote: >> > >> > Ah, docker at its best ) >> > >> > $ for iter in $(seq 1 100); do echo -n "$iter " 1>&2 ; docker run --rm >> edevil/docker-erlang-bug bash -c "sleep 1; erl -noshell -s test run -s init >> stop" 2>/dev/null; done | sort | uniq -c >> > 100 SUCCESS >> > >> > but >> > >> > for iter in $(seq 1 100); do echo -n "$iter " 1>&2 ; docker run --rm >> edevil/docker-erlang-bug erl -noshell -s test run -s init stop 2>/dev/null; >> done | sort | uniq -c >> > 12 FAILED >> > 88 SUCCESS >> > >> > So you should either use the bash/sleep trick or try to find a bug in docker. >> Honestly, I just gave up ) Especially given that it's not very convenient >> to use erlang distribution inside docker containers without something like >> weavedns. >> >> There are some subtle changes that somehow mitigate the problem, for >> example: >> >> $ docker run edevil/docker-erlang-bug bash -c "erl -noshell -s test run >> -s init stop 1>&1" >> SUCCESS >> >> Notice the strange stdout redirect. Without it: >> >> $ docker run edevil/docker-erlang-bug bash -c "erl -noshell -s test run >> -s init stop" >> FAILED >> >> It seems to me that the Erlang port is not aware that the external >> command has completed. Can we be sure this is a Docker problem and not some >> incorrect assumption by the Beam VM about its environment? This recent >> e-mail >> http://erlang.org/pipermail/erlang-questions/2015-October/086590.html >> talks about launched processes being on another process session, can this >> be related? >> >> André >> _______________________________________________ >> erlang-questions mailing list >> erlang-questions@REDACTED >> http://erlang.org/mailman/listinfo/erlang-questions >> > > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From garazdawi@REDACTED Thu Dec 17 21:16:51 2015 From: garazdawi@REDACTED (Lukas Larsson) Date: Thu, 17 Dec 2015 21:16:51 +0100 Subject: [erlang-questions] Strange interaction between Docker and Erlangs ports (exit_status lost) In-Reply-To: References: <1A27991C-1789-405C-86BE-997E75C511EA@cabine.org> Message-ID: On Thu, Dec 17, 2015 at 8:59 PM, Alexey Lebedeff wrote: > > So the assumption about no extra SIGCHLD is wrong and needs to be fixed. > Are you willing to do this? Or I could give it a try. > It may actually already be fixed in master. I just did a complete rewrite of the spawn driver to be released in 19.0 and if i remember correctly it should not have this problem. -------------- next part -------------- An HTML attachment was scrubbed... URL: From sparr0@REDACTED Thu Dec 17 19:45:13 2015 From: sparr0@REDACTED (Sparr) Date: Thu, 17 Dec 2015 10:45:13 -0800 Subject: [erlang-questions] Attempting to use erl_lint on a .erl source file Message-ID: I'm trying to use erl_lint() to build a simple Erlang syntax and style checker. I've gotten far enough to load the file and parse it into Forms and to get erl_lint to partially understand it, but then erl_lint complains about undefined functions that are defined. What am I doing wrong?

erlint.erl :

-module(erlint).
-export([lint/1]).

% based on http://stackoverflow.com/a/28086396/13675

lint(File) ->
    {ok, B} = file:read_file(File),
    Forms = scan(erl_scan:tokens([],binary_to_list(B),1),[]),
    F = fun(X) -> {ok,Y} = erl_parse:parse_form(X), Y end,
    erl_lint:module([F(X) || X <- Forms],File).

scan({done,{ok,T,N},S},Res) ->
    scan(erl_scan:tokens([],S,N),[T|Res]);
scan(_,Res) ->
    lists:reverse(Res).

hello.erl :

-module(hello).
-export([hello_world/0]).

hello_world() -> io:fwrite("hello, world\n").

attempt to use :

1> c(erlint).
{ok,erlint}
2> erlint:lint("hello.erl").
{error,[{"hello.erl",
         [{2,erl_lint,{undefined_function,{hello_world,0}}}]}],
       []}

PS: This is a cross post after posting on reddit and stackoverflow.
-------------- next part -------------- An HTML attachment was scrubbed... URL: From garazdawi@REDACTED Thu Dec 17 22:33:56 2015 From: garazdawi@REDACTED (Lukas Larsson) Date: Thu, 17 Dec 2015 22:33:56 +0100 Subject: [erlang-questions] Strange interaction between Docker and Erlangs ports (exit_status lost) In-Reply-To: References: <1A27991C-1789-405C-86BE-997E75C511EA@cabine.org> Message-ID: On Thu, Dec 17, 2015 at 9:16 PM, Lukas Larsson wrote: > > On Thu, Dec 17, 2015 at 8:59 PM, Alexey Lebedeff > wrote: >> >> So the assumption about no extra SIGCHLD is wrong and needs to be fixed. >> Are you willing to do this? Or I could give it a try. >> > > It may actually already be fixed in master. I just did a complete rewrite > of the spawn driver to be released in 19.0 and if i remember correctly it > should not have this problem. > Looking a bit more on the internet i found this: https://blog.phusion.nl/2015/01/20/docker-and-the-pid-1-zombie-reaping-problem/ which seems to describe exactly the problem that you are experiencing. The erlang vm is not built to run as the pid 1 process, even with my new changes in the master branch this will still be a problem as instead of missing out on exit_status messages you will have a system full of zombie processes. Lukas -------------- next part -------------- An HTML attachment was scrubbed... URL: From andre@REDACTED Fri Dec 18 00:00:37 2015 From: andre@REDACTED (=?utf-8?Q?Andr=C3=A9_Cruz?=) Date: Thu, 17 Dec 2015 23:00:37 +0000 Subject: [erlang-questions] Strange interaction between Docker and Erlangs ports (exit_status lost) In-Reply-To: References: <1A27991C-1789-405C-86BE-997E75C511EA@cabine.org> Message-ID: Sent from my iPad > On 17/12/2015, at 21:33, Lukas Larsson wrote: > Looking a bit more on the internet i found this: https://blog.phusion.nl/2015/01/20/docker-and-the-pid-1-zombie-reaping-problem/ which seems to describe exactly the problem that you are experiencing. 
The erlang vm is not built to run as the pid 1 process, even with my new changes in the master branch this will still be a problem as instead of missing out on exit_status messages you will have a system full of zombie processes. I understand that orphaned processes will be left as zombies, but that is a more explicit problem. You list the processes and see them. The Beam VM being stuck forever because the message we know we should have received is lost is a much more confusing situation, and it took a few people before we even realized what the problem was. There are several reports in the wild of strange lock-ups when compiling Elixir code, for example, that I think are related to this. So, in conclusion, it would be nice if the VM simply discarded unknown SIGCHLDs that it happens to waitpid() upon, without discarding the correct SIGCHLD and sending the expected exit_status message. André -------------- next part -------------- An HTML attachment was scrubbed... URL: From zxq9@REDACTED Fri Dec 18 03:21:15 2015 From: zxq9@REDACTED (zxq9) Date: Fri, 18 Dec 2015 11:21:15 +0900 Subject: [erlang-questions] Feedback for my first non-trivial Erlang program In-Reply-To: References: <4309580.Gyv4PvKBSB@changa> Message-ID: <1987272.B8r27d2JAS@changa> On 2015-12-17 (Thu) 19:34:02 you wrote: > Good illustration. Fibonacci to the rescue - for once... > I feel compelled to point out, however, that the talk about memory > explosion is not applicable here. In Fibonacci - and, I believe, in the > original program (though I didn't read it closely), the maximum stack > depth, and therefore the amount of stack memory required, is only linear in > n. (Well, with lists and such, it might be quadratic for the OP.) The > explosion is only timewise. > The accumulating, tail recursive version runs in constant space, of course. > Better, but linear would still do. > > /Erik You're right. The sequential, naive version is indeed linear in space.
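[Editor's note: the accumulating, tail-recursive version Erik mentions runs in constant stack space. This is an editorial reconstruction, chosen to match the fib_asc/1 and fib_asc/4 calls shown in Craig's transcript, not code quoted from the thread.]

```erlang
%% Accumulating, tail-recursive Fibonacci: O(N) time, constant stack.
fib_asc(0) -> 0;
fib_asc(1) -> 1;
fib_asc(N) when N > 1 -> fib_asc(N, 2, fib_asc(0), fib_asc(1)).

fib_asc(N, N, A, B) -> A + B;   % reached the target index
fib_asc(N, I, A, B) -> fib_asc(N, I + 1, B, A + B).
```

For example, fib_asc(20) reaches fib_asc(20, 20, 2584, 4181) and returns 6765, matching the transcript.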
A senselessly "parallelized" version grows much larger. Hilariously so.

%%% Stupid, "parallelized" definition. NEVER DO THIS.
fib_para(N, P) ->
    ok = io:format("Calling fib_para(~tp, ~tp)~n", [N, P]),
    fib_p(N, P).

fib_p(0, P) -> P ! {self(), 0};
fib_p(1, P) -> P ! {self(), 1};
fib_p(N, P) ->
    PA = spawn(?MODULE, fib_para, [N - 1, self()]),
    PB = spawn(?MODULE, fib_para, [N - 2, self()]),
    fib_p(P, {PA, none}, {PB, none}).

fib_p(P, {PA, none}, B) ->
    receive {PA, A} -> fib_p(P, {PA, A}, B) end;
fib_p(P, A, {PB, none}) ->
    receive {PB, B} -> fib_p(P, A, {PB, B}) end;
fib_p(P, {_, A}, {_, B}) ->
    P ! {self(), A + B}.

As TOTALLY INSANE as the above example is, this is exactly the kind of thing I see people actually do from time to time. Try a search for "parallel fibonacci" and you'll find countless tutorials in various languages that demonstrate the ridiculousness above. A *few* of them actually mention how stupid this is; many don't. I've run into production code (AHHH!) that winds up making exactly the same mistake as this, processizing something instead of actually parallelizing it.

(For some reason this is especially prevalent in languages, environments or code where words like "promise" and "future" come up a lot. Always makes me want to say "You keep using that word. I do not think it means what you think it means." But I think nobody would get the reference. (>.<) https://www.youtube.com/watch?v=wujVMIYzYXg )

It is interesting to run each on a high enough value that you get an answer the same day and don't crash the VM, but see the problems fib_desc/1 and fib_para/2 have:

2> timer:tc(fibses, fib_desc, [20]).
% ...
% snip
% ...
Calling: fib_desc(3)
Calling: fib_desc(2)
Calling: fib_desc(1)
Calling: fib_desc(0)
Calling: fib_desc(1)
Calling: fib_desc(2)
Calling: fib_desc(1)
Calling: fib_desc(0)
{732419,6765}
3> timer:tc(fibses, fib_asc, [20]).
Calling: fib_asc(20)
Calling: fib_asc(0)
Calling: fib_asc(1)
Calling: fib_asc(20, 2, 0, 1)
Calling: fib_asc(20, 3, 1, 1)
Calling: fib_asc(20, 4, 1, 2)
Calling: fib_asc(20, 5, 2, 3)
Calling: fib_asc(20, 6, 3, 5)
Calling: fib_asc(20, 7, 5, 8)
Calling: fib_asc(20, 8, 8, 13)
Calling: fib_asc(20, 9, 13, 21)
Calling: fib_asc(20, 10, 21, 34)
Calling: fib_asc(20, 11, 34, 55)
Calling: fib_asc(20, 12, 55, 89)
Calling: fib_asc(20, 13, 89, 144)
Calling: fib_asc(20, 14, 144, 233)
Calling: fib_asc(20, 15, 233, 377)
Calling: fib_asc(20, 16, 377, 610)
Calling: fib_asc(20, 17, 610, 987)
Calling: fib_asc(20, 18, 987, 1597)
Calling: fib_asc(20, 19, 1597, 2584)
Calling: fib_asc(20, 20, 2584, 4181)
{1322,6765}
3> timer:tc(fibses, fib_para, [20, self()]).
% ...
% snip
% ...
Calling fib_para(0, <0.21803.0>)
Calling fib_para(1, <0.21817.0>)
Calling fib_para(0, <0.21817.0>)
Calling fib_para(1, <0.21905.0>)
Calling fib_para(0, <0.21905.0>)
{627543,{<0.33.0>,6765}}
4> flush().
Shell got {<0.33.0>,6765}
ok
5>

Wow. <0.21905.0> That escalated quickly.

On second look, the times are pretty interesting. The stupidly parallel version performed slightly faster than the naive recursive version. Whoa... Checking a few more times, it is consistently a little bit faster. Both are hundreds of times slower than the iterative one, of course, but it is incredible to me that "spawn, spawn, call, send, send, receive, call, receive, call" is faster than "call, call, return, return".

-Craig

From vinoski@REDACTED Fri Dec 18 03:32:40 2015 From: vinoski@REDACTED (Steve Vinoski) Date: Thu, 17 Dec 2015 21:32:40 -0500 Subject: [erlang-questions] Attempting to use erl_lint on a .erl source file In-Reply-To: References: Message-ID: On Thu, Dec 17, 2015 at 1:45 PM, Sparr wrote: > I'm trying to use erl_lint() to build a simple Erlang syntax and style > checker.
I've gotten far enough to load the file and parse it into Forms > and to get erl_lint to partially understand it, but then erl_lint complains > about undefined functions that are defined. What am I doing wrong? > > erlint.erl : > > -module(erlint). > -export([lint/1]). > > % based on http://stackoverflow.com/a/28086396/13675 > > lint(File) -> > {ok, B} = file:read_file(File), > Forms = scan(erl_scan:tokens([],binary_to_list(B),1),[]), > F = fun(X) -> {ok,Y} = erl_parse:parse_form(X), Y end, > erl_lint:module([F(X) || X <- Forms],File). > > scan({done,{ok,T,N},S},Res) -> > scan(erl_scan:tokens([],S,N),[T|Res]); > scan(_,Res) -> > lists:reverse(Res). > > hello.erl : > > -module(hello). > -export([hello_world/0]). > > hello_world() -> io:fwrite("hello, world\n"). > > attempt to use : > > 1> c(erlint). > {ok,erlint} > 2> erlint:lint("hello.erl"). > {error,[{"hello.erl", > [{2,erl_lint,{undefined_function,{hello_world,0}}}]}], > []} > I copied both erlint and hello directly out of your email, pasted them into source modules, compiled erlint, and ran erlint:lint("hello.erl"), same as you show. It returned {ok,[]}. I then changed the first double quote in hello.erl to a single quote to introduce an obvious syntax error, and retried. That gave me the same result you're seeing, which makes sense because the module is exporting a function that it never defines due to the syntax error. You might want to check your hello.erl source, or just try to compile it, to make sure its contents are correct. --steve -------------- next part -------------- An HTML attachment was scrubbed... URL: From vinoski@REDACTED Fri Dec 18 04:26:08 2015 From: vinoski@REDACTED (Steve Vinoski) Date: Thu, 17 Dec 2015 22:26:08 -0500 Subject: [erlang-questions] Attempting to use erl_lint on a .erl source file In-Reply-To: References: Message-ID: On Thu, Dec 17, 2015 at 9:52 PM, Sparr wrote: > Figured it out with help from #erlang on freenode. 
If hello.erl is missing > the trailing newline then the last form gets missed. I'm going to try to > figure out how to account for that. > You don't need a newline specifically -- a space character will also work, for example. Just change this: Forms = scan(erl_scan:tokens([],binary_to_list(B),1),[]), to this: Forms = scan(erl_scan:tokens([],binary_to_list(B)++" ",1),[]), --steve > On Dec 17, 2015 18:32, "Steve Vinoski" wrote: > >> >> >> On Thu, Dec 17, 2015 at 1:45 PM, Sparr wrote: >> >>> I'm trying to use erl_lint() to build a simple Erlang syntax and style >>> checker. I've gotten far enough to load the file and parse it into Forms >>> and to get erl_lint to partially understand it, but then erl_lint complains >>> about undefined functions that are defined. What am I doing wrong? >>> >>> erlint.erl : >>> >>> -module(erlint). >>> -export([lint/1]). >>> >>> % based on http://stackoverflow.com/a/28086396/13675 >>> >>> lint(File) -> >>> {ok, B} = file:read_file(File), >>> Forms = scan(erl_scan:tokens([],binary_to_list(B),1),[]), >>> F = fun(X) -> {ok,Y} = erl_parse:parse_form(X), Y end, >>> erl_lint:module([F(X) || X <- Forms],File). >>> >>> scan({done,{ok,T,N},S},Res) -> >>> scan(erl_scan:tokens([],S,N),[T|Res]); >>> scan(_,Res) -> >>> lists:reverse(Res). >>> >>> hello.erl : >>> >>> -module(hello). >>> -export([hello_world/0]). >>> >>> hello_world() -> io:fwrite("hello, world\n"). >>> >>> attempt to use : >>> >>> 1> c(erlint). >>> {ok,erlint} >>> 2> erlint:lint("hello.erl"). >>> {error,[{"hello.erl", >>> [{2,erl_lint,{undefined_function,{hello_world,0}}}]}], >>> []} >>> >> >> I copied both erlint and hello directly out of your email, pasted them >> into source modules, compiled erlint, and ran erlint:lint("hello.erl"), >> same as you show. It returned {ok,[]}. I then changed the first double >> quote in hello.erl to a single quote to introduce an obvious syntax error, >> and retried. 
That gave me the same result you're seeing, which makes sense >> because the module is exporting a function that it never defines due to the >> syntax error. You might want to check your hello.erl source, or just try to >> compile it, to make sure its contents are correct. >> >> --steve >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sparr0@REDACTED Fri Dec 18 03:52:30 2015 From: sparr0@REDACTED (Sparr) Date: Thu, 17 Dec 2015 18:52:30 -0800 Subject: [erlang-questions] Attempting to use erl_lint on a .erl source file In-Reply-To: References: Message-ID: Figured it out with help from #erlang on freenode. If hello.erl is missing the trailing newline then the last form gets missed. I'm going to try to figure out how to account for that. On Dec 17, 2015 18:32, "Steve Vinoski" wrote: > > > On Thu, Dec 17, 2015 at 1:45 PM, Sparr wrote: > >> I'm trying to use erl_lint() to build a simple Erlang syntax and style >> checker. I've gotten far enough to load the file and parse it into Forms >> and to get erl_lint to partially understand it, but then erl_lint complains >> about undefined functions that are defined. What am I doing wrong? >> >> erlint.erl : >> >> -module(erlint). >> -export([lint/1]). >> >> % based on http://stackoverflow.com/a/28086396/13675 >> >> lint(File) -> >> {ok, B} = file:read_file(File), >> Forms = scan(erl_scan:tokens([],binary_to_list(B),1),[]), >> F = fun(X) -> {ok,Y} = erl_parse:parse_form(X), Y end, >> erl_lint:module([F(X) || X <- Forms],File). >> >> scan({done,{ok,T,N},S},Res) -> >> scan(erl_scan:tokens([],S,N),[T|Res]); >> scan(_,Res) -> >> lists:reverse(Res). >> >> hello.erl : >> >> -module(hello). >> -export([hello_world/0]). >> >> hello_world() -> io:fwrite("hello, world\n"). >> >> attempt to use : >> >> 1> c(erlint). >> {ok,erlint} >> 2> erlint:lint("hello.erl"). 
>> {error,[{"hello.erl", >> [{2,erl_lint,{undefined_function,{hello_world,0}}}]}], >> []} >> > > I copied both erlint and hello directly out of your email, pasted them > into source modules, compiled erlint, and ran erlint:lint("hello.erl"), > same as you show. It returned {ok,[]}. I then changed the first double > quote in hello.erl to a single quote to introduce an obvious syntax error, > and retried. That gave me the same result you're seeing, which makes sense > because the module is exporting a function that it never defines due to the > syntax error. You might want to check your hello.erl source, or just try to > compile it, to make sure its contents are correct. > > --steve > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sparr0@REDACTED Fri Dec 18 04:33:51 2015 From: sparr0@REDACTED (Sparr) Date: Thu, 17 Dec 2015 19:33:51 -0800 Subject: [erlang-questions] Attempting to use erl_lint on a .erl source file In-Reply-To: References: Message-ID: I want to lint the file as-is, not modify it to make my parsing work. The preprocessor/compiler accepts it without a trailing newline, so should I. On Thu, Dec 17, 2015 at 7:26 PM, Steve Vinoski wrote: > > > On Thu, Dec 17, 2015 at 9:52 PM, Sparr wrote: > >> Figured it out with help from #erlang on freenode. If hello.erl is >> missing the trailing newline then the last form gets missed. I'm going to >> try to figure out how to account for that. >> > > You don't need a newline specifically -- a space character will also work, > for example. Just change this: > > Forms = scan(erl_scan:tokens([],binary_to_list(B),1),[]), > > to this: > > Forms = scan(erl_scan:tokens([],binary_to_list(B)++" ",1),[]), > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From vladdu55@REDACTED Fri Dec 18 09:05:21 2015 From: vladdu55@REDACTED (Vlad Dumitrescu) Date: Fri, 18 Dec 2015 09:05:21 +0100 Subject: [erlang-questions] Attempting to use erl_lint on a .erl source file In-Reply-To: References: Message-ID: I believe (but haven't checked) that if you feed your code through the preprocessor (which you should do because it may contain macros), then epp will handle the "dot at eof" problem for you. best regards, Vlad On Fri, Dec 18, 2015 at 4:33 AM, Sparr wrote: > I want to lint the file as-is, not modify it to make my parsing work. The > preprocessor/compiler accepts it without a trailing newline, so should I. > > On Thu, Dec 17, 2015 at 7:26 PM, Steve Vinoski wrote: > >> >> >> On Thu, Dec 17, 2015 at 9:52 PM, Sparr wrote: >> >>> Figured it out with help from #erlang on freenode. If hello.erl is >>> missing the trailing newline then the last form gets missed. I'm going to >>> try to figure out how to account for that. >>> >> >> You don't need a newline specifically -- a space character will also >> work, for example. Just change this: >> >> Forms = scan(erl_scan:tokens([],binary_to_list(B),1),[]), >> >> to this: >> >> Forms = scan(erl_scan:tokens([],binary_to_list(B)++" ",1),[]), >> > > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mikpelinux@REDACTED Fri Dec 18 09:15:30 2015 From: mikpelinux@REDACTED (Mikael Pettersson) Date: Fri, 18 Dec 2015 09:15:30 +0100 Subject: [erlang-questions] Strange interaction between Docker and Erlangs ports (exit_status lost) In-Reply-To: References: <1A27991C-1789-405C-86BE-997E75C511EA@cabine.org> Message-ID: <22131.49314.640034.9795@gargle.gargle.HOWL> Lukas Larsson writes: > On Thu, Dec 17, 2015 at 9:16 PM, Lukas Larsson wrote: > > > > > On Thu, Dec 17, 2015 at 8:59 PM, Alexey Lebedeff > > wrote: > >> > >> So the assumption about no extra SIGCHLD is wrong and needs to be fixed. > >> Are you willing to do this? Or I could give it a try. > >> > > > > It may actually already be fixed in master. I just did a complete rewrite > > of the spawn driver to be released in 19.0 and if i remember correctly it > > should not have this problem. > > > > Looking a bit more on the internet i found this: > https://blog.phusion.nl/2015/01/20/docker-and-the-pid-1-zombie-reaping-problem/ > which seems to describe exactly the problem that you are experiencing. The > erlang vm is not built to run as the pid 1 process, even with my new > changes in the master branch this will still be a problem as instead of > missing out on exit_status messages you will have a system full of zombie > processes. To my old Unix eye it seems quite broken to suddenly run random stuff as PID 1 with the unexpected SIGCHLDs that entails. So to me it looks like a nasty bug in docker itself. Sure, the Erlang VM could try to work around it, but frankly it shouldn't have to. 
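[Editor's note: returning to the erl_lint thread, Vlad's epp suggestion amounts to a couple of lines. An editorial sketch; epp also expands macros and handles includes, and any scan or parse problems come back as error forms that erl_lint then reports.]

```erlang
%% Lint via the preprocessor: epp handles macros, include files, and the
%% dot-at-eof case, and hands erl_lint a complete list of forms.
lint(File) ->
    {ok, Forms} = epp:parse_file(File, [], []),
    erl_lint:module(Forms, File).
```

The second and third arguments are the include path and predefined macros, both empty here.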
From henrik.x.nord@REDACTED Fri Dec 18 12:26:29 2015 From: henrik.x.nord@REDACTED (Henrik Nord X) Date: Fri, 18 Dec 2015 12:26:29 +0100 Subject: [erlang-questions] Patch package OTP 18.2.1 released Message-ID: <5673ED65.7070900@ericsson.com> Patch Package: OTP 18.2.1 Git Tag: OTP-18.2.1 Date: 2015-12-18 Trouble Report Id: OTP-13202, OTP-13204 Seq num: System: OTP Release: 18 Application: erts-7.2.1 Predecessor: OTP 18.2 Check out the git tag OTP-18.2.1, and build a full OTP system including documentation. Apply one or more applications from this build as patches to your installation using the 'otp_patch_apply' tool. For information on install requirements, see descriptions for each application version below. --------------------------------------------------------------------- --- erts-7.2.1 ------------------------------------------------------ --------------------------------------------------------------------- The erts-7.2.1 application can be applied independently of other applications on a full OTP 18 installation. --- Fixed Bugs and Malfunctions --- OTP-13202 Application(s): erts Revert "Fix erroneous splitting of emulator path" OTP-13204 Application(s): erts Related Id(s): pr926 Fix HiPE enabled emulator for FreeBSD. 
Full runtime dependencies of erts-7.2.1: kernel-4.0, sasl-2.4, stdlib-2.5 --------------------------------------------------------------------- --- Thanks to ------------------------------------------------------- --------------------------------------------------------------------- Kenji Rikitake --------------------------------------------------------------------- --------------------------------------------------------------------- --------------------------------------------------------------------- From henrik.x.nord@REDACTED Fri Dec 18 12:41:02 2015 From: henrik.x.nord@REDACTED (Henrik Nord X) Date: Fri, 18 Dec 2015 12:41:02 +0100 Subject: [erlang-questions] Patch package OTP 18.2.1 released In-Reply-To: <5673ED65.7070900@ericsson.com> References: <5673ED65.7070900@ericsson.com> Message-ID: <5673F0CE.8050809@ericsson.com> This is specifically to be able to run the following programs with a PATH containing a space, on windows. * ct_run * dialyzer * erlc * escript * typer On 12/18/2015 12:26 PM, Henrik Nord X wrote: > Patch Package: OTP 18.2.1 > Git Tag: OTP-18.2.1 > Date: 2015-12-18 > Trouble Report Id: OTP-13202, OTP-13204 > Seq num: > System: OTP > Release: 18 > Application: erts-7.2.1 > Predecessor: OTP 18.2 > > Check out the git tag OTP-18.2.1, and build a full OTP system > including documentation. Apply one or more applications from this > build as patches to your installation using the 'otp_patch_apply' > tool. For information on install requirements, see descriptions for > each application version below. > > --------------------------------------------------------------------- > --- erts-7.2.1 ------------------------------------------------------ > --------------------------------------------------------------------- > > The erts-7.2.1 application can be applied independently of other > applications on a full OTP 18 installation. 
> > --- Fixed Bugs and Malfunctions --- > > OTP-13202 Application(s): erts > > Revert "Fix erroneous splitting of emulator path" > > > OTP-13204 Application(s): erts > Related Id(s): pr926 > > Fix HiPE enabled emulator for FreeBSD. > > > Full runtime dependencies of erts-7.2.1: kernel-4.0, sasl-2.4, > stdlib-2.5 > > > --------------------------------------------------------------------- > --- Thanks to ------------------------------------------------------- > --------------------------------------------------------------------- > > Kenji Rikitake > > > --------------------------------------------------------------------- > --------------------------------------------------------------------- > --------------------------------------------------------------------- > > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions -------------- next part -------------- An HTML attachment was scrubbed... URL: From mmartin4242@REDACTED Fri Dec 18 21:35:04 2015 From: mmartin4242@REDACTED (Michael Martin) Date: Fri, 18 Dec 2015 14:35:04 -0600 Subject: [erlang-questions] Jenkins Message-ID: <56746DF8.60502@gmail.com> Hi all, I'm trying to set up a Jenkins CI server on which to build my project. I'm brand new to Jenkins (as an administrator - I've been around Jenkins and Hudson before it for some years), and am having some difficulty with GitHub. Hopefully someone here has been through this before and has an answer. I'm building the project with emake. When I kick off a build (script that runs "make clean && make"), my repository is cloned just fine, but the dependencies aren't - GitHub kicks me off every time.
Here's the job log:

Building in workspace /var/lib/jenkins/jobs/Nebula/workspace
 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url http://github.com/building39/nebula2.git # timeout=10
Fetching upstream changes from http://github.com/building39/nebula2.git
 > git --version # timeout=10
using GIT_SSH to set credentials
using .gitcredentials to set credentials
 > git config --local credential.username building39 # timeout=10
 > git config --local credential.helper store --file=/tmp/git8309391135389932287.credentials # timeout=10
 > git -c core.askpass=true fetch --tags --progress http://github.com/building39/nebula2.git +refs/heads/*:refs/remotes/origin/*
 > git config --local --remove-section credential # timeout=10
 > git rev-parse refs/remotes/origin/develop^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/develop^{commit} # timeout=10
Checking out Revision f03e718ab443c4065cd7e93c74e5b04ae274329e (refs/remotes/origin/develop)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f f03e718ab443c4065cd7e93c74e5b04ae274329e
 > git rev-list f03e718ab443c4065cd7e93c74e5b04ae274329e # timeout=10
[workspace] $ /bin/sh -xe /tmp/hudson3671066196778207128.sh
+ make clean
 GEN    clean-app
+ make
Cloning into '/var/lib/jenkins/jobs/Nebula/workspace/deps/lager'...
Permission denied (publickey).
fatal: Could not read from remote repository.

Please make sure you have the correct access rights
and the repository exists.
/bin/sh: 1: cd: can't cd to /var/lib/jenkins/jobs/Nebula/workspace/deps/lager
make: *** [/var/lib/jenkins/jobs/Nebula/workspace/deps/lager] Error 2
Build step 'Execute shell' marked build as failure
Finished: FAILURE

Any ideas how to fix the Jenkins job configuration so that it can successfully clone the dependencies?
Thanks, Michael From felixgallo@REDACTED Fri Dec 18 21:38:32 2015 From: felixgallo@REDACTED (Felix Gallo) Date: Fri, 18 Dec 2015 12:38:32 -0800 Subject: [erlang-questions] Jenkins In-Reply-To: <56746DF8.60502@gmail.com> References: <56746DF8.60502@gmail.com> Message-ID: Do you have the lager dependency set to point to the canonical lager distribution, or perhaps a fork that your jenkins instance's github credentials don't have access to? F. On Fri, Dec 18, 2015 at 12:35 PM, Michael Martin wrote: > Hi all, > > I'm trying to set up a Jenkins CI server on which to build my project. I'm > brand new to Jenkins (as an administrator - > I've been around Jenkins and Hudson before it for some years), and am > having some difficulty with github. Hopefully > someone here has been through this before and has an answer. > > I'm building the project with emake. When I kick off a build (script that > runs "make clean && make"), my repository > is cloned just fine, but the dependencies don't - github kicks me off > every time. 
Here's the job log: > > Building in workspace /var/lib/jenkins/jobs/Nebula/workspace > > git rev-parse --is-inside-work-tree # timeout=10 > Fetching changes from the remote Git repository > > git config remote.origin.url http://github.com/building39/nebula2.git > # timeout=10 > Fetching upstream changes from http://github.com/building39/nebula2.git > > git --version # timeout=10 > using GIT_SSH to set credentials > using .gitcredentials to set credentials > > git config --local credential.username building39 # timeout=10 > > git config --local credential.helper store > --file=/tmp/git8309391135389932287.credentials # timeout=10 > > git -c core.askpass=true fetch --tags --progress http:// > github.com/building39/nebula2.git +refs/heads/*:refs/remotes/origin/* > > git config --local --remove-section credential # timeout=10 > > git rev-parse refs/remotes/origin/develop^{commit} # timeout=10 > > git rev-parse refs/remotes/origin/origin/develop^{commit} # timeout=10 > Checking out Revision f03e718ab443c4065cd7e93c74e5b04ae274329e > (refs/remotes/origin/develop) > > git config core.sparsecheckout # timeout=10 > > git checkout -f f03e718ab443c4065cd7e93c74e5b04ae274329e > > git rev-list f03e718ab443c4065cd7e93c74e5b04ae274329e # timeout=10 > [workspace] $ /bin/sh -xe /tmp/hudson3671066196778207128.sh > + make clean > GEN clean-app > + make > Cloning into '/var/lib/jenkins/jobs/Nebula/workspace/deps/lager'... > Permission denied (publickey). > fatal: Could not read from remote repository. > > Please make sure you have the correct access rights > and the repository exists. > /bin/sh: 1: cd: can't cd to > /var/lib/jenkins/jobs/Nebula/workspace/deps/lager > make: *** [/var/lib/jenkins/jobs/Nebula/workspace/deps/lager] Error 2 > Build step 'Execute shell' marked build as failure > Finished: FAILURE > > Any ideas how to fix the Jenkins job configuration so that it can > successfully clone the dependencies?
> > Thanks, > Michael > > > > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mmartin4242@REDACTED Fri Dec 18 21:41:24 2015 From: mmartin4242@REDACTED (Michael Martin) Date: Fri, 18 Dec 2015 14:41:24 -0600 Subject: [erlang-questions] Jenkins In-Reply-To: References: <56746DF8.60502@gmail.com> Message-ID: <56746F74.2050601@gmail.com>

Hi Felix, I'm trying to get lager from basho's repository. My Makefile:

PROJECT = nebula2

DEPS = lager meck crc16 jsx uuid riakc cowboy pooler mcd
dep_lager = git git@REDACTED:basho/lager.git 2.1.1
dep_meck = git git@REDACTED:eproxus/meck.git 0.8.2
dep_crc16 = git git@REDACTED:building39/crc16_nif.git 1.1
dep_jsx = git git@REDACTED:talentdeficit/jsx.git master
dep_uuid = git git://github.com/avtobiff/erlang-uuid.git v0.4.7
dep_riakc = git git@REDACTED:basho/riak-erlang-client.git master
dep_cowboy = git git@REDACTED:ninenines/cowboy.git 1.0.3
dep_pooler = git git@REDACTED:seth/pooler.git master
dep_mcd = git git@REDACTED:building39/mcd.git master

include erlang.mk

# Turn on lager
ERLC_COMPILE_OPTS= +'{parse_transform, lager_transform}'

ERLC_OPTS += $(ERLC_COMPILE_OPTS)
TEST_ERLC_OPTS += $(ERLC_COMPILE_OPTS)

On 12/18/2015 02:38 PM, Felix Gallo wrote: > Do you have the lager dependency set to point to the canonical lager > distribution, or perhaps a fork that your jenkins instance's github > credentials don't have access to? > > F. > > On Fri, Dec 18, 2015 at 12:35 PM, Michael Martin > > wrote: > > Hi all, > > I'm trying to set up a Jenkins CI server on which to build my > project. I'm brand new to Jenkins (as an administrator - > I've been around Jenkins and Hudson before it for some years), and > am having some difficulty with github. Hopefully > someone here has been through this before and has an answer.
> > I'm building the project with emake. When I kick off a build > (script that runs "make clean && make"), my repository > is cloned just fine, but the dependencies don't - github kicks me > off every time. Here's the job log: > > Building in workspace /var/lib/jenkins/jobs/Nebula/workspace > > git rev-parse --is-inside-work-tree # timeout=10 > Fetching changes from the remote Git repository > > git config > remote.origin.url http://github.com/building39/nebula2.git > # timeout=10 > Fetching upstream changes > from http://github.com/building39/nebula2.git > > > git --version # timeout=10 > using GIT_SSH to set credentials > using .gitcredentials to set credentials > > git config --local credential.username building39 # timeout=10 > > git config --local credential.helper store > --file=/tmp/git8309391135389932287.credentials # timeout=10 > > git -c core.askpass=true fetch --tags > --progress http://github.com/building39/nebula2.git > > +refs/heads/*:refs/remotes/origin/* > > git config --local --remove-section credential # timeout=10 > > git rev-parse refs/remotes/origin/develop^{commit} # timeout=10 > > git rev-parse refs/remotes/origin/origin/develop^{commit} # > timeout=10 > Checking out Revision f03e718ab443c4065cd7e93c74e5b04ae274329e > (refs/remotes/origin/develop) > > git config core.sparsecheckout # timeout=10 > > git checkout -f f03e718ab443c4065cd7e93c74e5b04ae274329e > > git rev-list f03e718ab443c4065cd7e93c74e5b04ae274329e # timeout=10 > [workspace] $ /bin/sh -xe /tmp/hudson3671066196778207128.sh > + make clean > GEN clean-app > + make > Cloning into '/var/lib/jenkins/jobs/Nebula/workspace/deps/lager'... > Permission denied (publickey). > fatal: Could not read from remote repository. > > Please make sure you have the correct access rights > and the repository exists.
> /bin/sh: 1: cd: can't cd to > /var/lib/jenkins/jobs/Nebula/workspace/deps/lager > make: *** [/var/lib/jenkins/jobs/Nebula/workspace/deps/lager] Error 2 > Build step 'Execute shell' marked build as failure > Finished: FAILURE > > Any ideas how to fix the Jenkins job configuration so that it can > successfully clone the dependencies? > > Thanks, > Michael > > > > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From felixgallo@REDACTED Fri Dec 18 21:49:55 2015 From: felixgallo@REDACTED (Felix Gallo) Date: Fri, 18 Dec 2015 12:49:55 -0800 Subject: [erlang-questions] Jenkins In-Reply-To: <56746F74.2050601@gmail.com> References: <56746DF8.60502@gmail.com> <56746F74.2050601@gmail.com> Message-ID: I created an empty directory, copied in your Makefile and a fairly recent erlang.mk, and typed make, and it all worked out, so it has to be some kind of environmental or versioning issue on your build machine. Is your git maybe configured to force a particular key? You might check out https://help.github.com/articles/error-permission-denied-publickey/ and run through those debug steps on the build server. F. On Fri, Dec 18, 2015 at 12:41 PM, Michael Martin wrote: > Hi Felix, > > I'm trying to get lager from basho's repository. 
> My Makefile: > > PROJECT = nebula2 > > DEPS = lager meck crc16 jsx uuid riakc cowboy pooler mcd > dep_lager = git git@REDACTED:basho/lager.git 2.1.1 > dep_meck = git git@REDACTED:eproxus/meck.git 0.8.2 > dep_crc16 = git git@REDACTED:building39/crc16_nif.git 1.1 > dep_jsx = git git@REDACTED:talentdeficit/jsx.git master > dep_uuid = git git://github.com/avtobiff/erlang-uuid.git v0.4.7 > dep_riakc = git git@REDACTED:basho/riak-erlang-client.git master > dep_cowboy = git git@REDACTED:ninenines/cowboy.git 1.0.3 > dep_pooler = git git@REDACTED:seth/pooler.git master > dep_mcd = git git@REDACTED:building39/mcd.git master > include erlang.mk > > # Turn on lager > ERLC_COMPILE_OPTS= +'{parse_transform, lager_transform}' > > ERLC_OPTS += $(ERLC_COMPILE_OPTS) > TEST_ERLC_OPTS += $(ERLC_COMPILE_OPTS) > > > > > On 12/18/2015 02:38 PM, Felix Gallo wrote: > > Do you have the lager dependency set to point to the canonical lager > distribution, or perhaps a fork that your jenkins instance's github > credentials don't have access to? > > F. > > On Fri, Dec 18, 2015 at 12:35 PM, Michael Martin > wrote: > >> Hi all, >> >> I'm trying to set up a Jenkins CI server on which to build my project. >> I'm brand new to Jenkins (as an administrator - >> I've been around Jenkins and Hudson before it for some years), and am >> having some difficulty with github. Hopefully >> someone here has been through this before and has an answer. >> >> I'm building the project with emake. When I kick off a build (script that >> runs "make clean && make"), my repository >> is cloned just fine, but the dependencies don't - github kicks me off >> every time. 
Here's the job log: >> >> Building in workspace /var/lib/jenkins/jobs/Nebula/workspace >> > git rev-parse --is-inside-work-tree # timeout=10 >> Fetching changes from the remote Git repository >> > git config remote.origin.url http://github.com/building39/nebula2.git >> # timeout=10 >> Fetching upstream changes from http://github.com/building39/nebula2.git >> > git --version # timeout=10 >> using GIT_SSH to set credentials >> using .gitcredentials to set credentials >> > git config --local credential.username building39 # timeout=10 >> > git config --local credential.helper store >> --file=/tmp/git8309391135389932287.credentials # timeout=10 >> > git -c core.askpass=true fetch --tags --progress http:// >> github.com/building39/nebula2.git +refs/heads/*:refs/remotes/origin/* >> > git config --local --remove-section credential # timeout=10 >> > git rev-parse refs/remotes/origin/develop^{commit} # timeout=10 >> > git rev-parse refs/remotes/origin/origin/develop^{commit} # timeout=10 >> Checking out Revision f03e718ab443c4065cd7e93c74e5b04ae274329e >> (refs/remotes/origin/develop) >> > git config core.sparsecheckout # timeout=10 >> > git checkout -f f03e718ab443c4065cd7e93c74e5b04ae274329e >> > git rev-list f03e718ab443c4065cd7e93c74e5b04ae274329e # timeout=10 >> [workspace] $ /bin/sh -xe /tmp/hudson3671066196778207128.sh >> + make clean >> GEN clean-app >> + make >> Cloning into '/var/lib/jenkins/jobs/Nebula/workspace/deps/lager'... >> Permission denied (publickey). >> fatal: Could not read from remote repository. >> >> Please make sure you have the correct access rights >> and the repository exists. >> /bin/sh: 1: cd: can't cd to >> /var/lib/jenkins/jobs/Nebula/workspace/deps/lager >> make: *** [/var/lib/jenkins/jobs/Nebula/workspace/deps/lager] Error 2 >> Build step 'Execute shell' marked build as failure >> Finished: FAILURE >> >> Any ideas how to fix the Jenkins job configuration so that it can >> successfully clone the dependencies?
>> >> Thanks, >> Michael >> >> >> >> _______________________________________________ >> erlang-questions mailing list >> erlang-questions@REDACTED >> http://erlang.org/mailman/listinfo/erlang-questions >> > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From adam@REDACTED Fri Dec 18 21:58:54 2015 From: adam@REDACTED (Adam Cammack) Date: Fri, 18 Dec 2015 14:58:54 -0600 Subject: [erlang-questions] Jenkins In-Reply-To: <56746F74.2050601@gmail.com> References: <56746DF8.60502@gmail.com> <56746F74.2050601@gmail.com> Message-ID: <20151218205854.GA15830@serenity> Your Makefile is telling git to use the git+ssh protocol, but Jenkins has already removed the credentials from the git config. I am not sure if this can be avoided with your git plugin by manually setting the credentials for the user on your server, but if those are all public repositories you can use the https://github.com// form instead. -- Adam Cammack On Fri, Dec 18, 2015 at 02:41:24PM -0600, Michael Martin wrote: > Hi Felix, > > I'm trying to get lager from basho's repository. 
> My Makefile: > > PROJECT = nebula2 > > DEPS = lager meck crc16 jsx uuid riakc cowboy pooler mcd > dep_lager = git git@REDACTED:basho/lager.git 2.1.1 > dep_meck = git git@REDACTED:eproxus/meck.git 0.8.2 > dep_crc16 = git git@REDACTED:building39/crc16_nif.git 1.1 > dep_jsx = git git@REDACTED:talentdeficit/jsx.git master > dep_uuid = git git://github.com/avtobiff/erlang-uuid.git v0.4.7 > dep_riakc = git git@REDACTED:basho/riak-erlang-client.git master > dep_cowboy = git git@REDACTED:ninenines/cowboy.git 1.0.3 > dep_pooler = git git@REDACTED:seth/pooler.git master > dep_mcd = git git@REDACTED:building39/mcd.git master > include erlang.mk > > # Turn on lager > ERLC_COMPILE_OPTS= +'{parse_transform, lager_transform}' > > ERLC_OPTS += $(ERLC_COMPILE_OPTS) > TEST_ERLC_OPTS += $(ERLC_COMPILE_OPTS) > > > > On 12/18/2015 02:38 PM, Felix Gallo wrote: > >Do you have the lager dependency set to point to the canonical lager > >distribution, or perhaps a fork that your jenkins instance's github > >credentials don't have access to? > > > >F. > > > >On Fri, Dec 18, 2015 at 12:35 PM, Michael Martin >> wrote: > > > > Hi all, > > > > I'm trying to set up a Jenkins CI server on which to build my > > project. I'm brand new to Jenkins (as an administrator - > > I've been around Jenkins and Hudson before it for some years), and > > am having some difficulty with github. Hopefully > > someone here has been through this before and has an answer. > > > > I'm building the project with emake. When I kick off a build > > (script that runs "make clean && make"), my repository > > is cloned just fine, but the dependencies don't - github kicks me > > off every time. 
Here's the job log: > > > > Building in workspace /var/lib/jenkins/jobs/Nebula/workspace > > > git rev-parse --is-inside-work-tree # timeout=10 > > Fetching changes from the remote Git repository > > > git config > > remote.origin.url http://github.com/building39/nebula2.git > > # timeout=10 > > Fetching upstream changes > > from http://github.com/building39/nebula2.git > > > > > git --version # timeout=10 > > using GIT_SSH to set credentials > > using .gitcredentials to set credentials > > > git config --local credential.username building39 # timeout=10 > > > git config --local credential.helper store > > --file=/tmp/git8309391135389932287.credentials # timeout=10 > > > git -c core.askpass=true fetch --tags > > --progress http://github.com/building39/nebula2.git > > > > +refs/heads/*:refs/remotes/origin/* > > > git config --local --remove-section credential # timeout=10 > > > git rev-parse refs/remotes/origin/develop^{commit} # timeout=10 > > > git rev-parse refs/remotes/origin/origin/develop^{commit} # > > timeout=10 > > Checking out Revision f03e718ab443c4065cd7e93c74e5b04ae274329e > > (refs/remotes/origin/develop) > > > git config core.sparsecheckout # timeout=10 > > > git checkout -f f03e718ab443c4065cd7e93c74e5b04ae274329e > > > git rev-list f03e718ab443c4065cd7e93c74e5b04ae274329e # timeout=10 > > [workspace] $ /bin/sh -xe /tmp/hudson3671066196778207128.sh > > + make clean > > GEN clean-app > > + make > > Cloning into '/var/lib/jenkins/jobs/Nebula/workspace/deps/lager'... > > Permission denied (publickey). > > fatal: Could not read from remote repository. > > > > Please make sure you have the correct access rights > > and the repository exists.
> > /bin/sh: 1: cd: can't cd to > > /var/lib/jenkins/jobs/Nebula/workspace/deps/lager > > make: *** [/var/lib/jenkins/jobs/Nebula/workspace/deps/lager] Error 2 > > Build step 'Execute shell' marked build as failure > > Finished: FAILURE > > > > Any ideas how to fix the Jenkins job configuration so that it can > > successfully clone the dependencies? > > > > Thanks, > > Michael > > > > > > > > _______________________________________________ > > erlang-questions mailing list > > erlang-questions@REDACTED > > http://erlang.org/mailman/listinfo/erlang-questions > > > > > > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions From mmartin4242@REDACTED Fri Dec 18 22:20:07 2015 From: mmartin4242@REDACTED (Michael Martin) Date: Fri, 18 Dec 2015 15:20:07 -0600 Subject: [erlang-questions] Jenkins In-Reply-To: <20151218205854.GA15830@serenity> References: <56746DF8.60502@gmail.com> <56746F74.2050601@gmail.com> <20151218205854.GA15830@serenity> Message-ID: <56747887.1090803@gmail.com> Bingo! Using the https:// form did the trick. Thanks, Adam! On 12/18/2015 02:58 PM, Adam Cammack wrote: > Your Makefile is telling git to use the git+ssh protocol, but Jenkins > has already removed the credentials from the git config. I am not sure > if this can be avoided with your git plugin by manually setting the > credentials for the user on your server, but if those are all public > repositories you can use the https://github.com// form > instead. > > -- > Adam Cammack > > On Fri, Dec 18, 2015 at 02:41:24PM -0600, Michael Martin wrote: >> Hi Felix, >> >> I'm trying to get lager from basho's repository. 
>> My Makefile: >> >> PROJECT = nebula2 >> >> DEPS = lager meck crc16 jsx uuid riakc cowboy pooler mcd >> dep_lager = git git@REDACTED:basho/lager.git 2.1.1 >> dep_meck = git git@REDACTED:eproxus/meck.git 0.8.2 >> dep_crc16 = git git@REDACTED:building39/crc16_nif.git 1.1 >> dep_jsx = git git@REDACTED:talentdeficit/jsx.git master >> dep_uuid = git git://github.com/avtobiff/erlang-uuid.git v0.4.7 >> dep_riakc = git git@REDACTED:basho/riak-erlang-client.git master >> dep_cowboy = git git@REDACTED:ninenines/cowboy.git 1.0.3 >> dep_pooler = git git@REDACTED:seth/pooler.git master >> dep_mcd = git git@REDACTED:building39/mcd.git master >> include erlang.mk >> >> # Turn on lager >> ERLC_COMPILE_OPTS= +'{parse_transform, lager_transform}' >> >> ERLC_OPTS += $(ERLC_COMPILE_OPTS) >> TEST_ERLC_OPTS += $(ERLC_COMPILE_OPTS) >> >> >> >> On 12/18/2015 02:38 PM, Felix Gallo wrote: >>> Do you have the lager dependency set to point to the canonical lager >>> distribution, or perhaps a fork that your jenkins instance's github >>> credentials don't have access to? >>> >>> F. >>> >>> On Fri, Dec 18, 2015 at 12:35 PM, Michael Martin >> > wrote: >>> >>> Hi all, >>> >>> I'm trying to set up a Jenkins CI server on which to build my >>> project. I'm brand new to Jenkins (as an administrator - >>> I've been around Jenkins and Hudson before it for some years), and >>> am having some difficulty with github. Hopefully >>> someone here has been through this before and has an answer. >>> >>> I'm building the project with emake. When I kick off a build >>> (script that runs "make clean && make"), my repository >>> is cloned just fine, but the dependencies don't - github kicks me >>> off every time. 
Here's the job log: >>> >>> Building in workspace /var/lib/jenkins/jobs/Nebula/workspace >>> > git rev-parse --is-inside-work-tree # timeout=10 >>> Fetching changes from the remote Git repository >>> > git config >>> remote.origin.url http://github.com/building39/nebula2.git >>> # timeout=10 >>> Fetching upstream changes >>> from http://github.com/building39/nebula2.git >>> >>> > git --version # timeout=10 >>> using GIT_SSH to set credentials >>> using .gitcredentials to set credentials >>> > git config --local credential.username building39 # timeout=10 >>> > git config --local credential.helper store >>> --file=/tmp/git8309391135389932287.credentials # timeout=10 >>> > git -c core.askpass=true fetch --tags >>> --progress http://github.com/building39/nebula2.git >>> >>> +refs/heads/*:refs/remotes/origin/* >>> > git config --local --remove-section credential # timeout=10 >>> > git rev-parse refs/remotes/origin/develop^{commit} # timeout=10 >>> > git rev-parse refs/remotes/origin/origin/develop^{commit} # >>> timeout=10 >>> Checking out Revision f03e718ab443c4065cd7e93c74e5b04ae274329e >>> (refs/remotes/origin/develop) >>> > git config core.sparsecheckout # timeout=10 >>> > git checkout -f f03e718ab443c4065cd7e93c74e5b04ae274329e >>> > git rev-list f03e718ab443c4065cd7e93c74e5b04ae274329e # timeout=10 >>> [workspace] $ /bin/sh -xe /tmp/hudson3671066196778207128.sh >>> + make clean >>> GEN clean-app >>> + make >>> Cloning into '/var/lib/jenkins/jobs/Nebula/workspace/deps/lager'... >>> Permission denied (publickey). >>> fatal: Could not read from remote repository. >>> >>> Please make sure you have the correct access rights >>> and the repository exists.
>>> /bin/sh: 1: cd: can't cd to >>> /var/lib/jenkins/jobs/Nebula/workspace/deps/lager >>> make: *** [/var/lib/jenkins/jobs/Nebula/workspace/deps/lager] Error 2 >>> Build step 'Execute shell' marked build as failure >>> Finished: FAILURE >>> >>> Any ideas how to fix the Jenkins job configuration so that it can >>> successfully clone the dependencies? >>> >>> Thanks, >>> Michael >>> >>> >>> >>> _______________________________________________ >>> erlang-questions mailing list >>> erlang-questions@REDACTED >>> http://erlang.org/mailman/listinfo/erlang-questions >>> >>> >> _______________________________________________ >> erlang-questions mailing list >> erlang-questions@REDACTED >> http://erlang.org/mailman/listinfo/erlang-questions From mangalaman93@REDACTED Sat Dec 19 01:42:26 2015 From: mangalaman93@REDACTED (aman mangal) Date: Sat, 19 Dec 2015 00:42:26 +0000 Subject: [erlang-questions] State Management Problem Message-ID: Hi everyone, I have been reading a few blogs on Erlang lately and some of them strongly point out that Erlang solves the reliability problem very nicely for distributed systems. But when I really think about it, Erlang solves only half of the reliability problem. It creates duplicate actors and handles their crashes by linking and supervision, but it does not handle the distributed state management problem at all. If I go back and look at the thesis of Joe Armstrong, it also talks about everything as an actor model. I am wondering what assumptions were made about state management at the time of creation of the language, as well as what good ways there are to handle the other half of the reliability problem when it comes to Erlang? I understand that this is a hard problem to solve, but at the same time, it seems to be a generic problem for distributed systems. Does/can Erlang provide any generic solutions? Aman -------------- next part -------------- An HTML attachment was scrubbed...
URL: From jusfeel@REDACTED Sat Dec 19 05:27:12 2015 From: jusfeel@REDACTED (Hao) Date: Sat, 19 Dec 2015 12:27:12 +0800 (CST) Subject: [erlang-questions] Didn't get it to work. The erlcount app from chapter 20 the count of application Message-ID: <5fac34d6.7ef0.151b87dd459.Coremail.jusfeel@163.com>

Hi, I tried the erlcount example after downloading the source code from the website http://learnyousomeerlang.com/. When I ran it, I got a bad_return; any idea why?

-----------
[jusfeel@REDACTED learn-you-some-erlang]$ erl -env ERL_LIBS "."
Erlang R16B02-basho5 (erts-5.10.3) [source] [64-bit] [smp:2:2] [async-threads:10] [hipe] [kernel-poll:false]
Eshell V5.10.3 (abort with ^G)
1> application:load(ppool).
ok
2> application:start(ppool).
{error,{bad_return,{{ppool,start,[normal,[]]},
                    {'EXIT',{undef,[{ppool,start,[normal,[]],[]},
                                    {application_master,start_it_old,4,
                                     [{file,"application_master.erl"},{line,269}]}]}}}}}
3>
=INFO REPORT==== 19-Dec-2015::12:18:27 ===
application: ppool
exited: {bad_return,
         {{ppool,start,[normal,[]]},
          {'EXIT',
           {undef,
            [{ppool,start,[normal,[]],[]},
             {application_master,start_it_old,4,
              [{file,"application_master.erl"},
               {line,269}]}]}}}}
type: temporary
3>

--
Hao WANG 86 186 0086 9737 http://blog.jusfeel.cn

-------------- next part -------------- An HTML attachment was scrubbed... URL: From zxq9@REDACTED Sat Dec 19 06:53:41 2015 From: zxq9@REDACTED (zxq9) Date: Sat, 19 Dec 2015 14:53:41 +0900 Subject: [erlang-questions] State Management Problem In-Reply-To: References: Message-ID: <4888947.eaRJizcy0y@changa> On 2015年12月19日 土曜日 00:42:26 aman mangal wrote: > Hi everyone, > > I have been reading a few blogs on Erlang lately and some of them strongly > point out that Erlang solves the reliability problem very nicely for > distributed systems. But when I really think about it, Erlang solves only > half of the reliability problem.
It creates duplicate actors, handles their > crashes by linking and supervision, but it does not handle the distributed > state management problem at all. If I go back and look at the thesis of Joe > Armstrong, it also talks about everything as an actor model. I am wondering > what assumptions were made about state management at the time of creation > of the language as well as what are good ways to handle the other half of > the reliability problem when it comes to Erlang? I understand that this is > a hard problem to solve but at the same time, it seems to be a generic > problem for Distributed Systems. Does/can Erlang provide any generic > solutions?

Short answer: No.

tl;dr: Three fundamental problems exist: consistency, availability, partition tolerance. Pick two. The Rules forbid solving all three at once.

Discussion:

The problems of distributed data are threefold, and only two can be solved at a time unless you happen to know how to either freeze time, open a wormhole or beat the speed of light. This is why there are no generic solutions to distributed data, only solutions that make tradeoffs of various types, and different tradeoffs are best suited to specific situations -- hence the impossibility of genericizing any solution.

The basic problem is described in the CAP theorem. It says a system can have:

- Consistency
- Availability
- Partition tolerance

but that you can only have two at once. That doesn't mean that all parts of your system have to make the same tradeoff with regard to state management, but again, the fact that a tradeoff must be made is an indication that there can never be a truly generic solution to this.

What Erlang lets you do is decide *for sure* whether something is running or crashing, instead of handling random faults in ad hoc ways.
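That crash-detection guarantee comes from monitors (and links): the runtime tells you that a process exited, and why, rather than leaving you to infer it from timeouts. A minimal sketch (the module name and crash reason are illustrative, not from the original message):

```erlang
%% monitor_demo.erl -- sketch of deciding *for sure* that a process died.
-module(monitor_demo).
-export([run/0]).

run() ->
    %% Spawn a worker that crashes immediately, and monitor it.
    {Pid, Ref} = spawn_monitor(fun() -> exit(boom) end),
    receive
        %% The runtime delivers the exit reason; no polling or guessing.
        {'DOWN', Ref, process, Pid, Reason} ->
            {worker_down, Reason}
    after 1000 ->
        no_crash_detected
    end.
```

Calling monitor_demo:run() returns {worker_down,boom}; OTP supervisors are built on exactly this kind of notification to decide when to restart a child.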
Tolerance for distributed failures is *also* something Erlang leaves up to the programmer to figure out, because the same CAP problem that exists in distributed state management also applied to the system's view of the state of its own operational capacity. (Does every node know what the state of every other node? That's data, too!) So this is a hard problem. In the real world *most* systems seem to be designed to start involving humans once partitions occur (though most have the ability to run in a degraded state of service until a sysop fixes things). In the imaginary world where there is a software package to cure every ill, all our theories are correct, software is bug-free and network latency is zero this is handled automatically by correct implementations of logically flawless leader election algorithms that always work and a second partition never occurs in the middle of partition resolution. But we don't live in that world. Partition tolerance is a hard problem, maybe the hardest to code around, so most systems seem to make a tradeoff that sacrifices (some level of) partition tolerance in exchange for (general, but maybe deferred) consistency and (an absolutely insane focus on) availability. -Craig From kennethlakin@REDACTED Sat Dec 19 07:41:41 2015 From: kennethlakin@REDACTED (Kenneth Lakin) Date: Fri, 18 Dec 2015 22:41:41 -0800 Subject: [erlang-questions] Didn't get it to work. The erlcount app from chapter 20 the count of application In-Reply-To: <5fac34d6.7ef0.151b87dd459.Coremail.jusfeel@163.com> References: <5fac34d6.7ef0.151b87dd459.Coremail.jusfeel@163.com> Message-ID: <5674FC25.6050101@gmail.com> On 12/18/2015 08:27 PM, Hao wrote: > Hi > > I tried the erlcount example. when I downloaded the source code from > website http://learnyousomeerlang.com/ > Then when I run, I had a bad_return, any idea why? > > ----------- > [jusfeel@REDACTED learn-you-some-erlang]$ erl -env ERL_LIBS "." 
It looks like Erlang doesn't know where to find the .beam files for the ppool application. You can tell Erlang to search additional directories for .beam files with the -pa command line option. Try this: $ cd learn-you-some-erlang/ppool-1.0 $ erl -pa ebin/ Then do application:start(ppool). in the Erlang shell. If that gives the same error, do make:all(). then try application:start(ppool). again. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: OpenPGP digital signature URL: From jusfeel@REDACTED Sat Dec 19 07:57:44 2015 From: jusfeel@REDACTED (Hao) Date: Sat, 19 Dec 2015 14:57:44 +0800 (CST) Subject: [erlang-questions] Didn't get it to work. The erlcount app from chapter 20 the count of application In-Reply-To: <5674FC25.6050101@gmail.com> References: <5fac34d6.7ef0.151b87dd459.Coremail.jusfeel@163.com> <5674FC25.6050101@gmail.com> Message-ID: <16157fe.a13d.151b907a3bf.Coremail.jusfeel@163.com> Thank you! I made a mistake in the Emakefile - outdir was written as "ourdir", so the .beam files were exported to the root directory for that module when I did "erl -make". -- hao At 2015-12-19 14:41:41, "Kenneth Lakin" wrote: >On 12/18/2015 08:27 PM, Hao wrote: >> Hi >> >> I tried the erlcount example. when I downloaded the source code from >> website http://learnyousomeerlang.com/ >> Then when I run, I had a bad_return, any idea why? >> >> ----------- >> [jusfeel@REDACTED learn-you-some-erlang]$ erl -env ERL_LIBS "." > >It looks like Erlang doesn't know where to find the .beam files for the >ppool application. You can tell Erlang to search additional directories >for .beam files with the -pa command line option. > >Try this: > >$ cd learn-you-some-erlang/ppool-1.0 > >$ erl -pa ebin/ > >Then do > >application:start(ppool). > >in the Erlang shell. > >If that gives the same error, do > >make:all(). > >then try application:start(ppool). again.
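[A sketch for readers following along -- these are standard OTP calls, but the module and path here are just the examples from this thread. The same diagnosis can be made from inside a running shell:]

```erlang
1> code:which(ppool_sup).    % non_existing means the .beam is not on the code path
2> code:add_patha("ebin").   % prepend a directory to the code path at runtime
3> code:which(ppool_sup).    % should now return the path to ppool_sup.beam
4> application:start(ppool).
```

code:which/1 is a quick way to confirm whether the path problem is really the cause before restarting the node with -pa.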
> > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mangalaman93@REDACTED Sat Dec 19 08:29:48 2015 From: mangalaman93@REDACTED (aman mangal) Date: Sat, 19 Dec 2015 07:29:48 +0000 Subject: [erlang-questions] State Management Problem In-Reply-To: <4888947.eaRJizcy0y@changa> References: <4888947.eaRJizcy0y@changa> Message-ID: I think that makes sense. We have to make trade-offs to make a system practical. Though, the idea that I was pointing to is this: just like the FLP proof states that there is no way to know whether a process has really failed or its messages are just delayed, yet Erlang provides a practical and working way to deal with it, can a language or standard libraries like OTP similarly give us practical ways to achieve trade-offs among Consistency, Availability and Partition Tolerance? I feel that the same problem (the CAP trade-off) is being solved in each system separately. As far as I understand, Joe Armstrong, in his thesis, argues that a language can provide constructs to achieve reliability and that's how Erlang came into the picture. I wonder whether CAP trade-offs can also be exposed using some standard set of libraries/language. - Aman On Sat, Dec 19, 2015 at 12:54 AM zxq9 wrote: > On Saturday, 19 December 2015 00:42:26 aman mangal wrote: > > Hi everyone, > > > > I have been reading a few blogs on Erlang lately and some of them > strongly > > points out that Erlang solves the reliability problem very nicely for > > distributed systems. But when I really think about it, Erlang solves only > > half of the reliability problem. It creates duplicate actors, handle > their > > crash by linking and supervision but it does not handle the distributed > > state management problem at all. If I go back and look at the thesis of > Joe > > Armstrong, it also talks about everything as an actor model.
I am > wondering > > what assumptions were made about state management at the time of creation > > of the language as well as what are good ways to handle the other half of > > the reliability problem when it comes to Erlang? I understand that this > is > > a hard problem to solve but at the same time, it seems to be a generic > > problem for Distributed Systems. Does/can Erlang provide any generic > > solutions? > > Short answer: > > No. > > > tl;dr: > > Three fundamental problems exist: consistency, availability, partition > tolerance. Pick two. The Rules forbid solving all three at once. > > > Discussion: > > The problems of distributed data are threefold, and only two can be solved > at a time unless you happen to know how to either freeze time, open a > wormhole or beat the speed of light. This is why there are no generic > solutions to distributed data, only solutions that make tradeoffs of > various types, and different tradeoffs are best suited to specific > situations -- hence the impossibility of genericizing any solution. > > The basic problem is described in the CAP theorem. It says a system can > have: > - Consistency > - Availability > - Partition tolerance > > but that you can only have 2 at once. > > That doesn't mean that all parts of your system have to make the same > tradeoff with regard to state management, but again, the fact that a > tradeoff must be made is indication that there can never be a truly generic > solution to this. > > What Erlang lets you do is decide *for sure* whether something is running > or crashing, instead of handling random faults in ad hoc ways. Tolerance > for distributed failures is *also* something Erlang leaves up to the > programmer to figure out, because the same CAP problem that exists in > distributed state management also applied to the system's view of the state > of its own operational capacity. (Does every node know what the state of > every other node? That's data, too!) > > So this is a hard problem. 
In the real world *most* systems seem to be > designed to start involving humans once partitions occur (though most have > the ability to run in a degraded state of service until a sysop fixes > things). In the imaginary world where there is a software package to cure > every ill, all our theories are correct, software is bug-free and network > latency is zero this is handled automatically by correct implementations of > logically flawless leader election algorithms that always work and a second > partition never occurs in the middle of partition resolution. But we don't > live in that world. > > Partition tolerance is a hard problem, maybe the hardest to code around, > so most systems seem to make a tradeoff that sacrifices (some level of) > partition tolerance in exchange for (general, but maybe deferred) > consistency and (an absolutely insane focus on) availability. > > -Craig > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions > -------------- next part -------------- An HTML attachment was scrubbed... URL: From zxq9@REDACTED Sat Dec 19 10:35:52 2015 From: zxq9@REDACTED (zxq9) Date: Sat, 19 Dec 2015 18:35:52 +0900 Subject: [erlang-questions] State Management Problem In-Reply-To: References: <4888947.eaRJizcy0y@changa> Message-ID: <1556449.EQOxXpbHRe@changa> On Saturday, 19 December 2015 07:29:48 you wrote: > I think, that makes sense. We have to make trade-offs to make a system > practical. > > Though, the idea that I was pointing to is that, just like FLP proof > states that > there is no way to know if a process has really failed or the messages are > just delayed but Erlang provides a practical and working way to deal with > it. Similarly, can a language or standard libraries like OTP give us > practical ways to achieve trade-offs among Consistency, Availability and > Partition Tolerance?
I feel that the same problem (the CAP trade-off) is > being solved in each system separately. > > As far as I understand, Joe Armstrong, in his thesis, argues that a > language can provide constructs to achieve reliability and that's how > Erlang came into the picture. I wonder whether CAP trade-offs can also be > exposed using some standard set of libraries/language. Keep two (and a half) things in mind. 1. Joe was addressing a specific type of fault tolerance. There are many other kinds than transient bugs. This approach does nothing to magically fix hardware failures, for example. 2. The original idea does not appear to have been focused on massively distributed applications, but rather on making concurrent processes be the logical unit of computation instead of memory-shared objects. (Hence "concurrency oriented programming" instead of "object oriented programming".) 2.5 This was done independently of "the actor model". Today that's a nice buzzword, and Erlang is very nearly the actor model, but that wasn't the point. This was achieved by forcing a large number of concurrency tradeoffs onto the programmer from the start, tradeoffs that are normally left up to the programmer: how to queue messages; what "message sequence" means (and doesn't mean); whether there is a global "tic" all processes can refer to; whether shared memory, message queues, and member function calls are all a big mixed bag of signaling possibilities or not; how semaphore/lock centric code should be; whether messaging is fundamentally asynchronous or synchronous; how underlying resources are scheduled; etc. The tradeoffs that were made do not suit every use case -- but it turns out that most of the tradeoffs suit a surprisingly broad variety of (most?) programming needs. TBH, a lot of the magical good that is in Erlang seems to have been incidental. (Actually, most of the good things that are in the world at all seem to have been incidental.)
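[Those forced tradeoffs are visible in even the smallest program. A toy sketch -- the module and message names are invented for illustration -- showing two of them: sends never block, and receive may match selectively, out of arrival order:]

```erlang
-module(mailbox_demo).
-export([run/0]).

run() ->
    Self = self(),
    %% Asynchronous send: the child fires off both messages and exits;
    %% nothing waits on delivery.
    spawn(fun() -> Self ! {low, 1}, Self ! {high, 2} end),
    %% Selective receive: {high, _} is taken from the mailbox first,
    %% even though {low, _} was queued ahead of it.
    receive
        {high, H} ->
            receive {low, L} -> {H, L} end
    end.
```

mailbox_demo:run() returns {2,1}: the queuing and ordering decisions were made for you by the language, not left as a per-project design problem.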
Point 2.5 is interesting to me personally because I had very nearly written a crappier version of OTP myself in (mostly) Python based around system processes and sockets to support a large project before realizing I could instead simplify things dramatically by just using Erlang. Joe's idea and Carl Hewitt's "Actor Model" were developed independently of one another but take a *very* similar approach -- one in the interest of real-world practicality, the other in the interest of developing an academically pure model of computation. This indicates to me that the basic idea of process-based computation lies very close to an underlying fundamental truth about information constructs. That I felt compelled to architect a system in a very similar manner after running into several dead-ends with regard to both concurrency-imposed complexity *and* fault tolerance (in ignorance of both OTP and the Actor Model) only cements this idea in my mind more firmly. But none of this addresses distributed state. There are a million aspects here that have to be calibrated to a particular project before you can even have a meaningful idea what "distributed state" means in any practical sense. This is because most of the state in most distributed systems doesn't matter to the *entire* system at any given moment. For example, in a game server do we care what the "global state" is? Ever? Of course not. We only care that enough related state is consistent enough that gameplay meets expectations. Here we consciously choose to accept some forms of inconsistency in return for lower latency (higher availability). If one node crashes we'll have to restart it but it shouldn't affect the rest of the system. Let me back up and make this example more concrete, because games today often represent an abnormally complete microcosm (and many of them are designed poorly, but there is a lot to learn from various missteps in games).
We have mobs, players, items, chat services, financial services, a user identity service, a related online forum, a purchasing/information website, a web interface for game ranking and character stats, the map, and a concept of "live zones". Mobs are basically non-essential in that they will respawn on a timer if they get killed or crash but their respawn state is always known. Player characters are non-essential, but in a different way: their restart *state* must be transactionally guaranteed -- they can't just lose all their items or score or levels or whatever when they die or if a managing process crashes. Item trade *must* be given transactional guarantees, otherwise duplicates can occur in trade or if two players pick up an item "at the same time", duplicate references can occur, or items might disappear from the game entirely (a worse outcome of the same bug -- parts of the game may break in fundamental ways). Chat is global, but totally asynch. Its also non-critical. If clients see messages chat works. If they don't its down, or nobody is chatting. Gameplay is utterly unaffected. Finance requires firm transactional guarantees (and a whole universe-worth of political comedy). User identities are really *account* identities, and so storage requires transactional guarantees, but login status requires a separate approach (stateless, authentication/verification based or whatever). Online forums are forums. This is a solved problem, right? Sure, maybe. But in what ways has it been "solved"? This is itself a whole separate world of approaches, backends, frontends, authentication bridges (because this must tie into the existing user identity system), etc. Web interface for stats, scores and ranks appears similar to the forums issue at first, but the data it will draw from and display is *completely* game related. At what point should this update? How closely should changes in the game be reflected here? Is the overhead worth making it "live"? 
Are players the ones who check their own pages, or do others? Etc. A whole world of totally different tradeoffs apply here -- and they may be critical to the community model that drives the core game business. (ouch!) The base map data itself is usually static and globally known -- to the point that every game client has a copy of it; awesome for performance but means map updates are *game client* updates. Anything special that is occurring in a map zone at a given moment is a function of map data overlay local to a zone (typically) -- and that can change on the fly, but only temporarily while that zone's host node is alive. The map is divided into zones, and each zone is tied to its host node. If a node goes down that "region" is inaccessible for the moment (and everything in it will crash and need to respawn elsewhere), but this isolates faults by region, which makes reasoning about it for players and developers fairly easy. ...And so on, and so on, et cetera, et cetera... Think through how distributed state works in each of these cases. There are tons of cases, and each one requires different tradeoffs. How would you go about writing a *generic* library that would provide a useful abstraction of *all* of those cases (and all the ones I left out -- game servers are complicated!)? Much of this is not language-specific, or even framework-specific, because programming at that level is all about what the data is doing while it is alive. In fact, I argue (and *do* argue this, passionately at times) that while the basic infrastructure of the system is usually best handled in Erlang, Erlang is *not* the best tool for *every* part of the system. This is also true of data storage. Most of the issues involving distributed state in a system cross the barrier between hardware and software, and are trickier to solve in generic ways.
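[For just one of the cases above -- the item trade that needs transactional guarantees -- mnesia transactions are the usual OTP-level answer. A minimal sketch, assuming a hypothetical item table keyed on id with an owner field; none of this comes from the original mail:]

```erlang
-module(trade).
-export([give/3]).

-record(item, {id, owner}).

%% Transfer ItemId from From to To atomically: either the whole
%% transfer commits or none of it does, so items cannot be duplicated
%% or lost even under concurrent pickups.
give(ItemId, From, To) ->
    F = fun() ->
            case mnesia:read(item, ItemId, write) of
                [#item{owner = From} = Item] ->
                    mnesia:write(Item#item{owner = To});
                _ ->
                    mnesia:abort(not_owner)
            end
        end,
    mnesia:transaction(F).
```

A successful call returns {atomic, ok}; a trade attempted by someone who is not the current owner returns {aborted, not_owner} and leaves the record untouched.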
That's why hardcore data people still talk in terms of spindles and read vs. write latency instead of just generally talking about iops (unless they are stuck in a cloud service, in which case their game service is nearly guaranteed to fail for lack of the remotest clue what their SLA means in terms of actually trying to play the game). That's why when you buy EMC or NetApp services you have to configure the system to provide the sort of performance tradeoffs that fit your system instead of just buying them like they were refrigerators. And this is just the hardware side of things. Trying to write a generic system to handle distributed state in software would require knowing quite a bit about the hardware, or else choices would be made totally in the blind. The result is that OTP handles state in terms of "safe state to recover and restart", and that's about it. The rest is up to you. Outside of Erlang we have database systems that have chosen various tradeoffs already, and we can pick from them where necessary. Some systems take the "transactions or bust, latency be damned" approach, other databases take the "conflicts will happen, or they won't, and messages will be received, or they won't, but who cares, because PERFORMANCE!!!" approach, etc. It's not just that nobody can pick the perfect CAP tradeoff for all cases. Even more fundamentally, there is no single database system that is a perfect solution for all types of data.
Sure, you can emulate any type of data in a full-service RDBMS, but sometimes it's better to have your canonical, normalized storage in Postgres but be asking questions of a graph database that is optimized for graph queries, or making text search queries to a system optimized for that, or using "schemaless" document databases that don't really provide guarantees of any sort (so you check yourself in other code) but are super fast and easy to understand for some specific case (like YouTube discussion comments that aren't of critical importance to humanity). This difference is what underlies thinking in data warehousing and archive retrieval (when people actually think, that is). If we can't even pick a "generic database" then it's impossible to imagine that any language runtime could come batteries-included with a "generic distributed state management framework". That would be a real trick indeed. If you happen to figure out how to swing that, give me a call -- I'll help you implement it and we'll be super famous, if not fabulously wealthy. Just to give yourself some brain food -- try describing what the API or interface to such a system would even look like. If that's hard to even imagine then the problem is usually more fundamental to the problem domain. -Craig From eriksoe@REDACTED Sat Dec 19 15:54:22 2015 From: eriksoe@REDACTED (Erik Søe Sørensen) Date: Sat, 19 Dec 2015 15:54:22 +0100 Subject: [erlang-questions] Feedback for my first non-trivial Erlang program In-Reply-To: <1987272.B8r27d2JAS@changa> References: <4309580.Gyv4PvKBSB@changa> <1987272.B8r27d2JAS@changa> Message-ID: Look again :-) Where does the time go? Times/results for asc, desc, and para versions: - With printing: {{4,6765},{488211,6765},{491135,{<0.44.0>,6765}}} I.e. para only a little slower than desc. I don't doubt that para appears a bit faster in your environment.
- Without printing: {{3,6765},{1233,6765},{71173,{<0.44.0>,6765}}} That's a clearer picture: The stupidly parallel version is plenty slower than the naïvely recursive version. /Erik 2015-12-18 3:21 GMT+01:00 zxq9 : > On Thursday, 17 December 2015 19:34:02 you wrote: > > Good illustration. Fibonacci to the rescue - for once... > > I feel compelled to point out, however, that the talk about memory > > explosion is not applicable here. In Fibonacci - and, I believe, in the > > original program (though I didn't read it closely), the maximum stack > > depth, and therefore the amount of stack memory required, is only linear > in > > n. (Well, with lists and such, it might be quadratic for the OP.) The > > explosion is only timewise. > > The accumulating, tail recursive version runs in constant space, of > course. > > Better, but linear would still do. > > > > /Erik > > You're right. > > The sequential, naive version is indeed linear in space. A senselessly > "parallelized" version grows much larger. Hilariously so. > > > %%% Stupid, "parallelized" definition. NEVER DO THIS. > fib_para(N, P) -> > ok = io:format("Calling fib_para(~tp, ~tp)~n", [N, P]), > fib_p(N, P). > > fib_p(0, P) -> P ! {self(), 0}; > fib_p(1, P) -> P ! {self(), 1}; > fib_p(N, P) -> > PA = spawn(?MODULE, fib_para, [N - 1, self()]), > PB = spawn(?MODULE, fib_para, [N - 2, self()]), > fib_p(P, {PA, none}, {PB, none}). > > fib_p(P, {PA, none}, B) -> receive {PA, A} -> fib_p(P, {PA, A}, B) end; > fib_p(P, A, {PB, none}) -> receive {PB, B} -> fib_p(P, A, {PB, B}) end; > fib_p(P, {_, A}, {_, B}) -> P ! {self(), A + B}. > > > As TOTALLY INSANE as the above example is, this is exactly the kind of > thing > I see people actually do from time to time. Try a search for "parallel > fibonacci" and you'll find countless tutorials in various languages that > demonstrate the ridiculousness above. A *few* of them actually mention how > stupid this is; many don't. I've run into production code (AHHH!)
that > winds > up making the exact same mistake as this, processizing something instead > of actually parallelizing it. > > (For some reason this is especially prevalent in languages, environments or > code where words like "promise" and "future" come up a lot. Always makes me > want to say "You keep using that word. I do not think it means what you > think it means." But I think nobody would get the reference. (>.<) > https://www.youtube.com/watch?v=wujVMIYzYXg ) > > It is interesting to run each on a high enough value that you get an > answer the same day and don't crash the VM, but see the problems fib_desc/1 > and fib_para/2 have: > > 2> timer:tc(fibses, fib_desc, [20]). > % ... > % snip > % ... > Calling: fib_desc(3) > Calling: fib_desc(2) > Calling: fib_desc(1) > Calling: fib_desc(0) > Calling: fib_desc(1) > Calling: fib_desc(2) > Calling: fib_desc(1) > Calling: fib_desc(0) > {732419,6765} > 3> timer:tc(fibses, fib_asc, [20]). > Calling: fib_asc(20) > Calling: fib_asc(0) > Calling: fib_asc(1) > Calling: fib_asc(20, 2, 0, 1) > Calling: fib_asc(20, 3, 1, 1) > Calling: fib_asc(20, 4, 1, 2) > Calling: fib_asc(20, 5, 2, 3) > Calling: fib_asc(20, 6, 3, 5) > Calling: fib_asc(20, 7, 5, 8) > Calling: fib_asc(20, 8, 8, 13) > Calling: fib_asc(20, 9, 13, 21) > Calling: fib_asc(20, 10, 21, 34) > Calling: fib_asc(20, 11, 34, 55) > Calling: fib_asc(20, 12, 55, 89) > Calling: fib_asc(20, 13, 89, 144) > Calling: fib_asc(20, 14, 144, 233) > Calling: fib_asc(20, 15, 233, 377) > Calling: fib_asc(20, 16, 377, 610) > Calling: fib_asc(20, 17, 610, 987) > Calling: fib_asc(20, 18, 987, 1597) > Calling: fib_asc(20, 19, 1597, 2584) > Calling: fib_asc(20, 20, 2584, 4181) > {1322,6765} > 3> timer:tc(fibses, fib_para, [20, self()]). > % ... > % snip > % ... > Calling fib_para(0, <0.21803.0>) > Calling fib_para(1, <0.21817.0>) > Calling fib_para(0, <0.21817.0>) > Calling fib_para(1, <0.21905.0>) > Calling fib_para(0, <0.21905.0>) > {627543,{<0.33.0>,6765}} > 4> flush().
> Shell got {<0.33.0>,6765} > ok > 5> > > > Wow. <0.21905.0> That escalated quickly. > > On second look, the times are pretty interesting. The stupidly parallel > version performed slightly faster than the naive recursive version. Whoa... > Checking a few more times, it is consistently a little bit faster. Both are > hundreds of times slower than the iterative one, of course, but it is > incredible to me that "spawn, spawn, call, send, send, receive, call, > receive, > call" is faster than "call, call, return, return". > > -Craig > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions > -------------- next part -------------- An HTML attachment was scrubbed... URL: From zxq9@REDACTED Sun Dec 20 04:45:49 2015 From: zxq9@REDACTED (zxq9) Date: Sun, 20 Dec 2015 12:45:49 +0900 Subject: [erlang-questions] Feedback for my first non-trivial Erlang program In-Reply-To: References: <1987272.B8r27d2JAS@changa> Message-ID: <5605801.CrfQ1gpXeN@changa> On Saturday, 19 December 2015 15:54:22 you wrote: > Look again :-) Where does the time go? > > Times/results for asc, desc, and para versions: > - With printing: {{4,6765},{488211,6765},{491135,{<0.44.0>,6765}}} > I.e. para only a little slower than desc. I don't doubt that para appears > a bit faster in your environment. > - Without printing: {{3,6765},{1233,6765},{71173,{<0.44.0>,6765}}} > > That's a clearer picture: The stupidly parallel version is plenty slower > than the naïvely recursive version. Haha! How silly of me to miss *that*! How embarrassing. Embarrassing as it is, though, it's an excellent and obvious example of side-effects completely distorting an external measurement. -Craig From mark@REDACTED Sun Dec 20 05:05:11 2015 From: mark@REDACTED (Mark Steele) Date: Sat, 19 Dec 2015 23:05:11 -0500 Subject: [erlang-questions] Streaming a folder from one node to another Message-ID: Hi all, Really quick question.
What do you folks recommend if you need to stream a folder (that might be arbitrarily big) from one node to another in a cluster (from within Erlang)? Regards, Mark -------------- next part -------------- An HTML attachment was scrubbed... URL: From zxq9@REDACTED Sun Dec 20 05:12:55 2015 From: zxq9@REDACTED (zxq9) Date: Sun, 20 Dec 2015 13:12:55 +0900 Subject: [erlang-questions] Streaming a folder from one node to another In-Reply-To: References: Message-ID: <1553210.s4dz6Syv2O@changa> On Saturday, 19 December 2015 23:05:11 Mark Steele wrote: > Hi all, > > Really quick question. What do you folks recommend if you need to stream a > folder (that might be arbitrarily big) from one node to another in a > cluster (from within Erlang)? A separate connection (do *not* do this within the disterl connection). Beyond that, whatever feels appropriate. Do you need to be able to restart in the middle if there is a failure, is a partial failure OK, is restarting the entire transfer OK instead, etc? Should this be throttled in some way based on an external input? There are a lot of little details that might come into play. If you just read/write file->socket->file in a naive way it would work fine, but file transfer requirements are usually a tiny bit more involved than that. -Craig From pierrefenoll@REDACTED Sun Dec 20 10:54:05 2015 From: pierrefenoll@REDACTED (Pierre Fenoll) Date: Sun, 20 Dec 2015 10:54:05 +0100 Subject: [erlang-questions] Streaming a folder from one node to another In-Reply-To: <1553210.s4dz6Syv2O@changa> References: <1553210.s4dz6Syv2O@changa> Message-ID: <910AE11E-FBFC-460B-BB1E-A5C35A25DECA@gmail.com> It depends on what you want to do, but have you tried piping the folder to tar then to netcat? > On 20 Dec 2015, at 05:12, zxq9 wrote: > >> On Saturday, 19 December 2015 23:05:11 Mark Steele wrote: >> Hi all, >> >> Really quick question.
What do you folks recommend if you need to stream a >> folder (that might be arbitrarily big) from one node to another in a >> cluster (from within Erlang) > > A separate connection (do *not* do this within the disterl connection). > > Beyond that, whatever feels appropriate. Do you need to be able to restart in > the middle if there is a failure, is a partial failure OK, is restarting the > entire transfer OK instead, etc? Should this be throttled in some way based > on an external input? > > There are a lot of little details that might come into play. If you just > read/write file->socket->file in a naive way it would work fine, but file > transfer requirements are usually a tiny bit more involved than that. > > -Craig > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions From unix1@REDACTED Sun Dec 20 14:00:31 2015 From: unix1@REDACTED (Unix One) Date: Sun, 20 Dec 2015 13:00:31 +0000 Subject: [erlang-questions] Behavior callbacks in edoc output Message-ID: Hello, Is it possible to have edoc generate documentation for behavior callbacks? I found the following: http://erlang.org/pipermail/erlang-patches/2012-July/002912.html http://erlang.org/pipermail/erlang-questions/2012-August/068722.html which reference a planned fix, but edoc still generates a mostly blank behavior output with only the top-level module description. Adding a @doc block before the -callback results in an edoc error about it not being allowed in the module footer. It seems like an important piece of documentation to omit from the output. Am I missing something obvious? Thank you!
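[For readers unfamiliar with the attribute in question, this is roughly the kind of declaration edoc was skipping over; a minimal sketch of a behaviour module with module and callback names invented for illustration:]

```erlang
-module(my_behaviour).

%% -callback specs declare the functions a callback module must
%% export. The compiler checks conforming modules against them, but
%% (per the thread above) edoc did not include them in its output.
-callback init(Args :: term()) ->
    {ok, State :: term()} | {error, Reason :: term()}.
-callback handle_thing(Thing :: term(), State :: term()) ->
    {ok, NewState :: term()}.
```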
From sperber@REDACTED Sun Dec 20 16:19:44 2015 From: sperber@REDACTED (Michael Sperber) Date: Sun, 20 Dec 2015 16:19:44 +0100 Subject: [erlang-questions] Call for Participation: BOB 2016 (February 19, Berlin) Message-ID: ================================================================ BOB 2016 Conference "What happens if we simply use what's best?" February 19, 2016 Berlin http://bobkonf.de/2016/ Program: http://bobkonf.de/2016/program.html Registration: http://bobkonf.de/2016/registration.html ================================================================ BOB is the conference for developers, architects and decision-makers to explore technologies beyond the mainstream in software development, and to find the best tools available to software developers today. Our goal is for all participants of BOB to return home with new insights that enable them to improve their own software development experiences. The program features 14 talks and 8 tutorials on current topics: http://bobkonf.de/2016/program.html The subject range of talks includes functional programming, advanced front-end development, data management, and sophisticated uses of types. The tutorials feature introductions to Erlang, Haskell, Scala, Isabelle, Purescript, Idris, Akka HTTP, and Specification by Example. Elise Huard will hold the keynote talk - about Languages We Love. Registration is open online: http://bobkonf.de/2016/registration.html NOTE: The early-bird rates expire on January 17, 2016! BOB cooperates with the :clojured conference on the following day. There is a registration discount available for participants of both events. http://www.clojured.de/ From siraaj@REDACTED Sun Dec 20 18:21:41 2015 From: siraaj@REDACTED (Siraaj Khandkar) Date: Sun, 20 Dec 2015 12:21:41 -0500 Subject: [erlang-questions] lager changelog? 
In-Reply-To: <1452939788.1838505.1450211408955.JavaMail.yahoo@mail.yahoo.com> References: <1452939788.1838505.1450211408955.JavaMail.yahoo@mail.yahoo.com> Message-ID: <5676E3A5.3040901@khandkar.net> Thank you, Mark! On 12/15/15 3:30 PM, Mark Allen wrote: > There is no official changelog but as far as breaking API changes from > 2.x to 3.x the big one is around how traces work (or don't) with > multiple sinks. > > My expectation is that using lager 3 on a project that previously used > 2.x should Just Work. If it doesn't please open an issue on the repo - > because that's definitely something I'd like to know about. > > Thanks. > > Mark > > > > On Friday, December 11, 2015 4:10 PM, Technion wrote: > > > Hi, > > Given that the Lager readme is pretty good, you can get a pretty good > answer on this by reviewing changes to the readme. > > $ git clone https://github.com/basho/lager.git > $ cd lager > $ git diff 2.0.0rc2 3.0.2 -- README.md > > The diff will include any API changes. > ________________________________________ > From: erlang-questions-bounces@REDACTED > > > on behalf of Siraaj > Khandkar > > Sent: Saturday, 12 December 2015 7:15 AM > To: erlang-questions@REDACTED > Subject: [erlang-questions] lager changelog? > > Is anyone aware of anything like a changelog for lager? I did not see > anything obvious at https://github.com/basho/lager > > More-specifically, I inherited a project which uses 2.0.0rc1 and am > wondering what surprises and API changes await me if I wanted to upgrade. From ok@REDACTED Mon Dec 21 03:12:04 2015 From: ok@REDACTED (ok@REDACTED) Date: Mon, 21 Dec 2015 15:12:04 +1300 Subject: [erlang-questions] Streaming a folder from one node to another In-Reply-To: <1553210.s4dz6Syv2O@changa> References: <1553210.s4dz6Syv2O@changa> Message-ID: Re streaming an arbitrarily large folder from one node to another in a cluster: 1. Why is it necessary to do this? 
Why is it impossible or undesirable to just serve blocks from the files to the other node on demand? 2. You seem to be suggesting that it is necessary to communicate the whole thing in one go. Why is it not possible to set up a common initial state and stream changes? 3. What are you going to do if the contents of the folder change faster than you can stream them? 4. I'm wondering about the cost of streaming vs the cost of simply switching the storage device from one node's control to another's. (In the extreme, getting a human to swap cables. I mean, that could take just seconds. Electronic switching would likely be better. And yes, I'm thinking of old "truck of tapes" ideas.) Surely the cluster is equipped to switch devices from one CPU board to another in order to handle CPU board failure... From mangalaman93@REDACTED Mon Dec 21 14:11:59 2015 From: mangalaman93@REDACTED (aman mangal) Date: Mon, 21 Dec 2015 13:11:59 +0000 Subject: [erlang-questions] State Management Problem In-Reply-To: <1556449.EQOxXpbHRe@changa> References: <4888947.eaRJizcy0y@changa> <1556449.EQOxXpbHRe@changa> Message-ID: I think I see your point. I will definitely think about it using all the knowledge that I have. Databases do solve the problem, but I think that for real-time applications, going to a database would be slow. The part to think about, for me, would be to come up with a useful API. Thanks for your inputs. Aman On Sat, Dec 19, 2015 at 4:36 AM zxq9 wrote: > On Saturday, 19 December 2015 07:29:48 you wrote: > > I think, that makes sense. We have to make trade-offs to make a system > > practical. > > > > Though, the idea that I was pointing to is that, just like FLP proof > > states that > > there is no way to know if a process has really failed or the messages > are > > just delayed but Erlang provides a practical and working way to deal with > > it.
Similarly, can a language or standard libraries like OTP give us > > practical ways to achieve trade-offs among Consistency, Availability and > > Partition Tolerance? I feel that the same problem (the CAP trade-off) is > > being solved in each system separately. > > > > As far as I understand, Joe Armstrong, in his thesis, argues that a > > language can provide constructs to achieve reliability and that's how > > Erlang came into the picture. I wonder whether CAP trade-offs can also be > > exposed using some standard set of libraries/language. > > Keep two (and a half) things in mind. > > 1. Joe was addressing a specific type of fault tolerance. There are many > other kinds than transient bugs. This approach does nothing to magically > fix hardware failures, for example. > > 2. The original idea does not appear to have been focused on massively > distributed applications, but rather on making concurrent processes be the > logical unit of computation instead of memory-shared objects. (Hence > "concurrency oriented programming" instead of "object oriented > programming".) > > 2.5 This was done independently of "the actor model". Today that's a nice > buzzword, and Erlang is very nearly the actor model, but that wasn't the > point. > > This was achieved by forcing a large number of concurrency tradeoffs onto > the programmer from the start, tradeoffs that are normally left up to the > programmer: how to queue messages; what "message sequence" means (and > doesn't mean); whether there is a global "tic" all processes can refer to; > if shared memory, message queues, and member function calls are all a big > mixed bag of signaling possibilities or not; how semaphore/lock centric > code should be; whether messaging was fundamentally asynchronous or > synchronous; how underlying resources are scheduled; etc. > > The tradeoffs that were made do not suit every use case -- but it turns > out that most of the tradeoffs suit a surprisingly broad variety of (most?)
> programming needs. TBH, a lot of the magical good that is in Erlang seems > to have been incidental. (Actually, most of the good things that are in the > world at all seem to have been incidental.) > > Point 2.5 is interesting to me personally because I had very nearly > written a crappier version of OTP myself in (mostly) Python based around > system processes and sockets to support a large project before realizing I > could instead simplify things dramatically by just using Erlang. Joe's idea > and Carl Hewitt's "Actor Model" were developed independently of one another > but take a *very* similar approach -- one in the interest of real-world > practicality, the other in the interest of developing an academically pure > model of computation. This indicates to me that the basic idea of > process-based computation lies very close to an underlying fundamental > truth about information constructs. That I felt compelled to architect a > system in a very similar manner after running into several dead-ends with > regard to both concurrency-imposed complexity *and* fault tolerance (in > ignorance of both OTP and the Actor Model) only cements this idea in my > mind more firmly. > > But none of this addresses distributed state. > > There are a million aspects here that have to be calibrated to a > particular project before you can even have a meaningful idea what > "distributed state" means in any practical sense. This is because most of > the state in most distributed systems doesn't matter to the *entire* system > at any given moment. For example, in a game server do we care what the > "global state" is? Ever? Of course not. We only care that enough related > state is consistent enough that gameplay meets expectations. Here we > consciously choose to accept some forms of inconsistency in return for > lower latency (higher availability). If one node crashes we'll have to > restart it, but it shouldn't affect the rest of the system.
Let me back up > and make this example more concrete, because games today often represent an > abnormally complete microcosm (and many of them are designed poorly, but > there is a lot to learn from various missteps in games). > > We have mobs, players, items, chat services, financial services, a user > identity service, a related online forum, a purchasing/information website, > a web interface for game ranking and character stats, the map, and a > concept of "live zones". > > Mobs are basically non-essential in that they will respawn on a timer if > they get killed or crash, but their respawn state is always known. > > Player characters are non-essential, but in a different way: their restart > *state* must be transactionally guaranteed -- they can't just lose all > their items or score or levels or whatever when they die or if a managing > process crashes. > > Item trade *must* be given transactional guarantees, otherwise duplicates > can occur in trade or if two players pick up an item "at the same time", > duplicate references can occur, or items might disappear from the game > entirely (a worse outcome of the same bug -- parts of the game may break in > fundamental ways). > > Chat is global, but totally asynch. It's also non-critical. If clients see > messages, chat works. If they don't, it's down, or nobody is chatting. > Gameplay is utterly unaffected. > > Finance requires firm transactional guarantees (and a whole universe-worth > of political comedy). > > User identities are really *account* identities, and so storage requires > transactional guarantees, but login status requires a separate approach > (stateless, authentication/verification based or whatever). > > Online forums are forums. This is a solved problem, right? Sure, maybe. > But in what ways has it been "solved"? This is itself a whole separate > world of approaches, backends, frontends, authentication bridges (because > this must tie into the existing user identity system), etc.
> > Web interface for stats, scores and ranks appears similar to the forums > issue at first, but the data it will draw from and display is *completely* > game related. At what point should this update? How closely should changes > in the game be reflected here? Is the overhead worth making it "live"? Are > players the ones who check their own pages, or do others? Etc. A whole > world of totally different tradeoffs apply here -- and they may be critical > to the community model that drives the core game business. (ouch!) > > The base map data itself is usually static and globally known -- to the > point that every game client has a copy of it; awesome for performance but > means map updates are *game client* updates. Anything special that is > occurring in a map zone at a given moment is a function of map data overlay > local to a zone (typically) -- and that can change on the fly, but only > temporarily while that zone's host node is alive. > > The map is divided into zones, and each zone is tied to its host node. If > a node goes down that "region" is inaccessible for the moment (and > everything in it will crash and need to respawn elsewhere), but it > isolates faults by region, which makes reasoning about it for players and > developers fairly easy. > > ...And so on, and so on, et cetera, et cetera... > > Think through how distributed state works in each of these cases. There > are tons of cases, and each one requires different tradeoffs. How would you > go about writing a *generic* library that would provide a useful > abstraction of *all* of those cases (and all the ones I left out -- game > servers are complicated!)? > > Much of this is not language-specific, or even framework-specific. Because > programming at that level is all about what the data is doing while it is > alive.
In fact, I argue (and *do* argue this, passionately at times) that > while the basic infrastructure of the system is usually best handled in > Erlang, Erlang is *not* the best tool for *every* part of the system. This > is also true of data storage. > > Most of the issues involving distributed state in a system cross the > barrier between hardware and software, and are trickier to solve in generic > ways. That's why hardcore data people still talk in terms of spindles and > read VS write latency instead of just generally talking about iops (unless > they are stuck in a cloud service, in which case their game service is > nearly guaranteed to fail for lack of the remotest clue what their SLA > means in terms of actually trying to play the game). That's why when you > buy EMC or NetApp services you have to configure the system to provide the > sort of performance tradeoffs that fit your system instead of just buying > them like they were refrigerators. And this is just the hardware side of > things. Trying to write a generic system to handle distributed state in > software would require knowing quite a bit about the hardware or else > choices would be made totally in the blind. > > The result is that OTP handles state in terms of "safe state to recover > and restart", and that's about it. The rest is up to you. Outside of Erlang > we have database systems that have chosen various tradeoffs already, and we > can pick from them where necessary. Some systems take the "transactions or > bust, latency be damned" approach, other databases take the "conflicts will > happen, or they won't and messages will be received, or they won't, but who > cares, because PERFORMANCE!!!" approach, etc. > > It's not just that nobody can pick the perfect CAP tradeoff for all cases. > Even more fundamentally, there is no single database system that is a > perfect solution for all types of data.
Sure, you can emulate any type of > data in a full-service RDBMS, but sometimes it's better to have your > canonical, normalized storage in Postgres but be asking questions to a > graphing database that is optimized for graph queries, or making text > search queries to a system optimized for that, or "schemaless" document > databases that don't really provide guarantees of any sort (so you check > yourself in other code) but are super fast and easy to understand for some > specific case (like youtube discussion comments that aren't of critical > importance to humanity). This difference is what underlies thinking in data > warehousing and archive retrieval (when people actually think, that is). > > If we can't even pick a "generic database" then it's impossible to imagine > that any language runtime could come batteries included with a "generic > distributed state management framework". That would be a real trick indeed. > If you happen to figure out how to swing that, give me a call -- I'll help > you implement it and we'll be super famous, if not fabulously wealthy. > > Just to give yourself some brain food -- try describing what the API or > interface to such a system would even look like. If that's hard to even > imagine then the problem is usually more fundamental to the problem domain. > > -Craig > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions > -------------- next part -------------- An HTML attachment was scrubbed... URL: From zxq9@REDACTED Mon Dec 21 14:56:26 2015 From: zxq9@REDACTED (zxq9) Date: Mon, 21 Dec 2015 22:56:26 +0900 Subject: [erlang-questions] State Management Problem In-Reply-To: References: <1556449.EQOxXpbHRe@changa> Message-ID: <2933780.VmMeTyWdvx@changa> On 2015-12-21 13:11:59 you wrote: > I think I see your point. I will definitely think about it using all the > knowledge that I have.
Databases do solve the problem but I think that for > real time applications, going to a database would be slow. The part to > think about for me, would be to come up with a useful API. Thanks for your > inputs. By the way, I've been down the path you are contemplating. I think everyone needs to do this once to really understand how tools fit problems. That is to say, I will probably never trust a system architect on a multi-faceted project unless they've gone through this once themselves. Now I tend to write an Erlang service as an application service ("Oh no! Sooooo old fashioned!"). Clients perceive that service as the mother of all data operations, but they are high-level operations -- operations where the verb parts are relevant to the high-level business problem and the noun parts are themselves business-problem-level entities. Lower level operations (permanent storage, specialized indexing, specialized queries, etc.) still happen in database type systems behind the scenes. Very often that's Postgres as a canonical store, but the Erlang application service has a "live" cache of whatever data is active and often a request for data never makes it to the database because it already is live in the application server. Sometimes a graphing database is involved for super-fast queries over otherwise difficult queries -- but you might be surprised at how much you can lean on Postgres itself without taking a performance hit (and when you do have slow queries Postgres has a plethora of great tools available to figure things out and tune). Sometimes a separate copy of the data is in a document, text, image, geometric, or graph db (or separate denormalized tables or even specialized tablespaces) because certain searches are just plain hard to optimize for in the general case. When you really need this sort of extra help, though, it is usually painfully obvious -- so never start out that way, instead get the business-level logic right first. 
Typically clients deal with a *very* small fraction of the total data in the storage backend at any given moment, and they keep hitting that tiny fraction over and over. This tends to be true whether it's game or business data, actually. (Surprising how similar they are.) Old data tends to not be of any interest, so there is this small percentage of request churn over whatever the going issue of the day happens to be (new contracts, open estimates, current projects, new HR data, some specific financial report, this month's foo, the latest dungeon content, highest-ranking player states, etc.). There are major advantages to warehousing data in a DBA-approved relational sort of way. It makes warehousing issues *much* easier to deal with (denormalizing a copy of the data for some specific purpose, for example) than trying to take everything out of a gigantic K-V store or super-super performant but split personality "webscale" db and normalizing it after the fact (protip: you will have all sorts of random loose ends that are insanely hard to figure out, no matter what sort of "semantic tagging" system you *think* you've invented -- sometimes this is so bad that whatever analytic results your client is trying to discover turn out totally wrong). So it still comes down to different tools for different uses. That active, high-interest data is a *perfect* fit for ETS tables and/or process state (depending on the case -- both are crazy fast, but you don't want every item in a game with 500M active items to be a process). But this data should always be rebuildable from your backend database state that comes from a stodgy old, DBA-endorsed, safety-first sort of database where data security is well understood and a large community of experts exists. When you start your application service up, the first requests are what populate your ETS tables and cause processes to spawn. When resources are unneeded those caches shrink (processes exit, tables shrink).
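That populate-on-first-request pattern can be sketched in a few lines of Erlang. This is only a rough illustration, not code from the thread: the module name, table name, and the fetch_from_db/1 stand-in for a real backend query (e.g. against Postgres) are all made up.

```erlang
%% A minimal read-through cache: look in ETS first, fall back to the
%% (hypothetical) canonical store on a miss and cache the result.
-module(live_cache).
-export([start/0, get/1]).

start() ->
    ets:new(?MODULE, [named_table, public, set]).

get(Key) ->
    case ets:lookup(?MODULE, Key) of
        [{Key, Value}] ->
            Value;                          % hot data: served from memory
        [] ->
            Value = fetch_from_db(Key),     % cold data: hit the backend once
            ets:insert(?MODULE, {Key, Value}),
            Value
    end.

%% Stand-in for a real query against the canonical store.
fetch_from_db(Key) ->
    {db_value, Key}.
```

A real version would also evict or expire entries so the cache can shrink when interest moves on, as described above.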
Having clients talk to the application service instead of an ORM or the data backends directly lets you forget that there is a MASSIVE PROBLEM with ORM frameworks (for anything more interesting than, say, a blog website framework), because you will still have to write a translation between your application representation and the database representation. This is probably the most tedious part of writing the whole system -- but if you don't do this you will wind up re-inventing every bit of the transactional, relational, navigation-capable, document and object database paradigms all mishmashed together into an incoherent, buggy, half-baked API without realizing it (until it's too late to change your mind, that is... *then* you'll certainly realize it). DON'T REINVENT THE DATABASE. But you'll do this once, no matter what I say. Anyone who architects and then implements two huge systems does this once (either the first or the second time). And then you'll realize that an application-service-as-a-datasource can be a very fast, nice thing, and that databases are magical tools that you're really, really glad someone else went to the trouble to write. Also -- none of this *solves* the distributed state problem. But there is hope! As you write systems you'll start feeling out places where it is OK to partition the problem, where temporary inconsistency in the cached data is OK (and where not in the backend datastore), how much write lag is OK between the application service and the backend, what sort of queries are pull-your-hair-out hard or slow or slow-and-hard without a specialized database (hint: text search, image search and graph queries), and other such issues. Also -- what data is just OK to disappear POOF! when something goes wrong (ephemeral chat messages, for example).
It's a lot to think through, and whatever tradeoffs you decide fit in specific spots are going to depend entirely on the problem you are trying to solve in *that* part of the system and the user-facing context. This is true in any language and any environment. Anyone who says differently has only ever dealt with trivial data or is a big fat liar trying to get some magical consultancy cash from you. None of this is to intimidate or discourage you. Have fun. (Seriously!) These data conundrums are some of the most delicate and interesting problems you'll ever encounter -- and unlike algorithmic solutions to procedural problems that translate readily to arithmetic, data problems are *never* fully "solved". (Which also means this is a rabbit hole you can lose your entire career/mind in... forever!) -Craig From mark@REDACTED Mon Dec 21 17:37:04 2015 From: mark@REDACTED (Mark Steele) Date: Mon, 21 Dec 2015 11:37:04 -0500 Subject: [erlang-questions] erlang-questions Digest, Vol 249, Issue 1 In-Reply-To: References: Message-ID: > > > ------------------------------ > > Message: 4 > Date: Mon, 21 Dec 2015 15:12:04 +1300 > From: > To: zxq9 > Cc: erlang-questions@REDACTED > Subject: Re: [erlang-questions] Streaming a folder from one node to > another > Message-ID: > > Content-Type: text/plain; charset="iso-8859-1" > > Re streaming an arbitrarily large folder from one node to another > in a cluster: > > 1. Why is it necessary to do this? > Why is it impossible or undesirable to just serve blocks from > the files to the other node on demand? > > To back up data in a sharded distributed system from a central point. > 2. You seem to be suggesting that it is necessary to communicate > the whole thing in one go. Why is it not possible to set up > a common initial state and stream changes? > > Yes I am. There is no shared initial state. > 3. What are you going to do if the contents of the folder change > faster than you can stream them?
> > Data files are immutable once written; with a bit of coordination it's possible to get a point-in-time snapshot. > 4. I'm wondering about the cost of streaming vs the cost of simply > switching the storage device from one node's control to > another's. (In the extreme, getting a human to swap cables. > I mean, that could take just seconds. Electronic switching would > likely be better. And yes, I'm thinking of old "truck of tapes" > ideas.) Surely the cluster is equipped to switch devices from > one CPU board to another in order to handle CPU board failure... > > This speculation is not relevant. -------------- next part -------------- An HTML attachment was scrubbed... URL: From sean@REDACTED Mon Dec 21 18:00:02 2015 From: sean@REDACTED (Functional Jobs) Date: Mon, 21 Dec 2015 12:00:02 -0500 Subject: [erlang-questions] New Functional Programming Job Opportunities Message-ID: <5678304c59354@functionaljobs.com> Here are some functional programming job opportunities that were posted recently: Software Engineer (Scala/Play/Scala.js/React) at AdAgility https://functionaljobs.com/jobs/8871-software-engineer-scala-play-scalajs-react-at-adagility Cheers, Sean Murphy FunctionalJobs.com From bdurksen@REDACTED Mon Dec 21 20:21:13 2015 From: bdurksen@REDACTED (Bill Durksen) Date: Mon, 21 Dec 2015 14:21:13 -0500 Subject: [erlang-questions] Singleton question or design question for transfer file test program Message-ID: <56785129.5050506@vaxxine.com> I am building an Erlang program that copies files between distributed nodes. - the design calls for a Singleton object (or some mechanism) to store an IoDevice value, call it "FdRead", that was returned from a file:open() call. Similarly, I will need to store an "FdWrite" IoDevice value as well. - the one Erlang process for Reading a file will be sending N 1024-byte blocks to the Erlang process for Writing a file.
- eventually these will be distributed nodes, where each Sent Block will be a single event, as well as each Received Block - it works for the first block, but I haven't figured out how to keep the IoDevice values from one read or write event to the next. I looked at the ets system for storing data in tables and tried a few experiments, but wasn't able to successfully store an IoDevice type in an ets table. My backup plans are: b) throw away the blocking design and just read the entire file in one shot. Same thing for writing. c) close and re-open the files (increase block size) i.e. When writing: - for the first block create: write and save, - and for the remaining blocks: do append and save However the reading version may have performance issues; it would be something like this: - for the first block open, read and send - and for the remaining blocks, open, scan to last block sent + 1, read and send I am looking for answers to the questions below: 1. How can I store an IoDevice type in ets? 2. Is there a way to create Singleton objects in Erlang other than using ets? 3. If I want to keep the N blocks design, and also avoid re-opening the files being read/written each time, can I have a "curried" function for receiving messages from a distributed node? This last idea just occurred to me, so I put it down in question form. When I have time, I will go back and investigate this idea. Thanks, Bill From kennethlakin@REDACTED Mon Dec 21 21:17:38 2015 From: kennethlakin@REDACTED (Kenneth Lakin) Date: Mon, 21 Dec 2015 12:17:38 -0800 Subject: [erlang-questions] Singleton question or design question for transfer file test program In-Reply-To: <56785129.5050506@vaxxine.com> References: <56785129.5050506@vaxxine.com> Message-ID: <56785E62.3060409@gmail.com> On 12/21/2015 11:21 AM, Bill Durksen wrote: > I looked at the ets system for storing data in tables and tried a few > experiments, but wasn't able to successfully store an IoDevice type in > an ets table.
Did you see this note in the documentation for file:open/2? "IoDevice is really the pid of the process which handles the file. This process is linked to the process which originally opened the file. If any process to which the IoDevice is linked terminates, the file will be closed and the process itself will be terminated. An IoDevice returned from this call can be used as an argument to the IO functions (see io(3))." This works just fine:

1> Tab=ets:new(tab, []).
20496
2> {ok, F}=file:open("file.mp3", [read]).
{ok,<0.39.0>}
3> ets:insert(Tab, {file, F}).
true
4> ets:lookup_element(Tab, file, 1).
file
5> ets:lookup_element(Tab, file, 2).
<0.39.0>
6> file:read(ets:lookup_element(Tab, file, 2), 10).
{ok,[73,68,51,3,0,0,0,0,0,86]}

-------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: OpenPGP digital signature URL: From andras.boroska@REDACTED Mon Dec 21 21:41:09 2015 From: andras.boroska@REDACTED (Boroska András) Date: Mon, 21 Dec 2015 20:41:09 +0000 Subject: [erlang-questions] Singleton question or design question for transfer file test program In-Reply-To: <56785E62.3060409@gmail.com> References: <56785129.5050506@vaxxine.com> <56785E62.3060409@gmail.com> Message-ID: > "2. Is there a way to create Singleton objects in Erlang other than using ets?" Singleton objects are typically registered gen_servers in Erlang. They can be registered locally per node or globally per cluster. You probably want to register the reader and the writer globally and start them on their respective nodes. For development they can be started on the same node as well. For simplicity the IoDevice could be stored in the state of each gen_server. Open the files in the init/1 callback of your gen_server. See also: http://www.erlang.org/doc/design_principles/gen_server_concepts.html Br, Andras -------------- next part -------------- An HTML attachment was scrubbed...
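The registered-gen_server approach Andras describes can be sketched roughly as follows. The module name, local registration, and 1024-byte block size are made up for illustration; the part that matters is the pattern itself: open the file once in init/1 and keep the IoDevice in the server state so it survives from one request to the next.

```erlang
%% A registered gen_server acting as the "singleton" owner of a file handle.
%% The IoDevice lives in the server state, so the file stays open for the
%% life of the process and every call reads the next block in sequence.
-module(file_reader).
-behaviour(gen_server).
-export([start_link/1, read_block/0]).
-export([init/1, handle_call/3, handle_cast/2, terminate/2]).

start_link(Path) ->
    gen_server:start_link({local, ?MODULE}, ?MODULE, Path, []).

read_block() ->
    gen_server:call(?MODULE, read_block).

init(Path) ->
    {ok, Fd} = file:open(Path, [read, binary]),  % opened once, in init/1
    {ok, Fd}.                                    % state = the IoDevice

handle_call(read_block, _From, Fd) ->
    Reply = file:read(Fd, 1024),                 % {ok, Block} | eof
    {reply, Reply, Fd}.

handle_cast(_Msg, Fd) ->
    {noreply, Fd}.

terminate(_Reason, Fd) ->
    file:close(Fd).
```

A matching writer gen_server would hold FdWrite the same way, and both could be registered globally instead of locally once the reader and writer live on different nodes.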
URL: From bchesneau@REDACTED Mon Dec 21 21:46:28 2015 From: bchesneau@REDACTED (Benoit Chesneau) Date: Mon, 21 Dec 2015 20:46:28 +0000 Subject: [erlang-questions] ets:safe_fixtable/2 & ets:tab2file/{2, 3} question In-Reply-To: References: <20151217142421.GB64758@fhebert-ltm2.internal.salesforce.com> Message-ID: On Thu, Dec 17, 2015 at 7:44 PM Antonio SJ Musumeci wrote: > What exactly is it that you need behaviorally? You could also have a process > which just continuously iterates over the table placing the records into a > rotating `disk_log`. If you include a timestamp or have something to know > precisely which version of the record you can replay the log and recover > the state you want. If you need straight up snapshots then maybe a liberal > select would give you a dump. I don't recall however at what level the > select/match locks and if the table is large it'd be expensive memory wise. > > It's hard to beat a COW setup if you need snapshots. > Right, but COW over 1M keys in memory could be hard. I need to think more. As an experiment I wrote this small code: https://github.com/barrel-db/memdb which implements MVCC over an ETS table (ordered set). For convenience each key is for now only a binary, though that can change. It allows a poor man's prefix lookup by checking the next key starting with the prefix. Iterators offer a consistent view from a point in time. Multiple readers can consume an iterator while writes happen behind. The compaction (removing old revisions) is not implemented. I don't think it will go further though. I got another idea I wanted to test to build a memory database, maximising the use of processes. Just wanted to share the code. - benoit > On Thu, Dec 17, 2015 at 11:38 AM, Felix Gallo > wrote: >> You can take advantage of erlang's concurrency to get >> arbitrarily-close-to-redis semantics.
>> >> For example, redis's bgsave could be achieved by writing as usual to your >> ets table, but also sending a duplicate message to a gen_server whose job >> it is to keep up to date a second, slave, ets table. That gen_server would >> be the one to provide dumps (via to_dets or whatever other facility). Then >> if it has to pause while it dumps, its message queue grows during the >> duration but eventually flushes out and brings itself back up to date. >> Meanwhile the primary ets replica continues to be usable. >> >> It's not a silver bullet because, like redis, you would still have to >> worry about the pathological conditions, like dumps taking so long that the >> slave gen_server's queue gets out of control, or out of memory conditions, >> etc., etc. But if you feel like implementing paxos or waiting about 3 >> months, you could also generalize the gen_server so that a group of them >> formed a distributed cluster. >> >> F. >> >> >> On Thu, Dec 17, 2015 at 8:16 AM, Benoit Chesneau >> wrote: >> >>> >>> >>> On Thu, Dec 17, 2015 at 5:13 PM Benoit Chesneau >>> wrote: >>> >>>> On Thu, Dec 17, 2015 at 3:24 PM Fred Hebert wrote: >>>> >>>>> On 12/17, Benoit Chesneau wrote: >>>>> >But what happen when I use `ets:tab2file/2` while keys are >>>>> continuously >>>>> >added at the end? When does it stop? >>>>> > >>>>> >>>>> I'm not sure what answer you expect to the question "how can I keep an >>>>> infinitely growing table from taking an infinite amount of time to dump >>>>> to disk" that doesn't require locking it to prevent the growth from >>>>> showing up. >>>>> >>>> >>>> well by keeping a version of the data at some point :) But that's not >>>> how it works unfortunately. >>>> >>>> >>>>> >>>>> Do note that safe_fixtable/2 does *not* prevent new inserted elements >>>>> from showing up in your table -- it only prevents objects from being >>>>> taken out or being iterated over twice. 
While it's easier to create a >>>>> pathological case with an ordered_set table (keep adding +1 as keys >>>>> near the end), it is not beyond the realm of possibility to do so with >>>>> other table types (probably with lots of insertions and playing with >>>>> process priorities, or predictable hash sequences). >>>>> >>>>> I don't believe there's any way to lock a public table (other than >>>>> implicit blocking in match and select functions). If I were to give a >>>>> wild guess, I'd say to look at ets:info(Tab,size), and have your >>>>> table-dumping process stop when it reaches the predetermined size or >>>>> meets an earlier exit. This would let you bound the time it takes you >>>>> to >>>>> dump the table, at the cost of possibly neglecting to add information >>>>> (which you would do anyway -- you would just favor older info before >>>>> newer info). This would however imply reimplementing your own tab2file >>>>> functionality. >>>>> >>>>> >>>> Good idea, I need to think a little more about it. I wish it could be >>>> possible to fork an ets table at some point and only use this snapshot in >>>> memory like REDIS does literally by forking the process when dumping it. >>>> That would be useful... >>>> >>>> Thanks for the answer! >>>> >>>> >>> Side note, but I am thinking that selecting keys per batch also limits >>> the possible effects of the concurrent writes since it can work faster that >>> way, though writing to the file is slow. >>> >>> - benoit >>> >>> _______________________________________________ >>> erlang-questions mailing list >>> erlang-questions@REDACTED >>> http://erlang.org/mailman/listinfo/erlang-questions >>> >>> >> >> _______________________________________________ >> erlang-questions mailing list >> erlang-questions@REDACTED >> http://erlang.org/mailman/listinfo/erlang-questions >> >> > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From zandra.hird@REDACTED Tue Dec 22 14:52:52 2015 From: zandra.hird@REDACTED (Zandra Hird) Date: Tue, 22 Dec 2015 14:52:52 +0100 Subject: [erlang-questions] Patch Package OTP 17.5.6.7 Released Message-ID: <567955B4.9090801@ericsson.com> Patch Package: OTP 17.5.6.7 Git Tag: OTP-17.5.6.7 Date: 2015-12-21 Trouble Report Id: OTP-12947, OTP-12969, OTP-13137 Seq num: System: OTP Release: 17 Application: diameter-1.9.2.2 Predecessor: OTP 17.5.6.6 Check out the git tag OTP-17.5.6.7, and build a full OTP system including documentation. Apply one or more applications from this build as patches to your installation using the 'otp_patch_apply' tool. For information on install requirements, see descriptions for each application version below. --------------------------------------------------------------------- --- diameter-1.9.2.2 ------------------------------------------------ --------------------------------------------------------------------- The diameter-1.9.2.2 application can be applied independently of other applications on a full OTP 17 installation. --- Fixed Bugs and Malfunctions --- OTP-12969 Application(s): diameter Fix diameter_watchdog function clause. OTP-12912 introduced an error with accepting transports setting {restrict_connections, false}, causing processes to fail when peer connections were terminated. OTP-13137 Application(s): diameter Fix request table leaks The End-to-End and Hop-by-Hop identifiers of outgoing Diameter requests are stored in a table in order for the caller to be located when the corresponding answer message is received. Entries were orphaned if the handler was terminated by an exit signal as a consequence of actions taken by callback functions, or if callbacks modified identifiers in retransmission cases. --- Improvements and New Features --- OTP-12947 Application(s): diameter Add service_opt() strict_mbit. There are differing opinions on whether or not reception of an arbitrary AVP setting the M-bit is an error. 
The default interpretation is strict: if a command grammar doesn't explicitly allow an AVP setting the M-bit then reception of such an AVP is regarded as an error. Setting {strict_mbit, false} disables this check. Full runtime dependencies of diameter-1.9.2.2: erts-6.0, kernel-3.0, ssl-5.3.4, stdlib-2.0 --------------------------------------------------------------------- --------------------------------------------------------------------- --------------------------------------------------------------------- From marco.molteni@REDACTED Tue Dec 22 13:38:19 2015 From: marco.molteni@REDACTED (Marco Molteni) Date: Tue, 22 Dec 2015 13:38:19 +0100 Subject: [erlang-questions] interested in rebar3 support for the IDEA Erlang plugin? Contribute to the bounty :-) Message-ID: If you are using the Intellij IDEA Erlang plugin https://github.com/ignatov/intellij-erlang and would like it to support rebar3, give some money :-) We already are at $140 https://www.bountysource.com/issues/28505949-rebar3-support thanks marco.m From t@REDACTED Tue Dec 22 16:20:03 2015 From: t@REDACTED (Tristan Sloughter) Date: Tue, 22 Dec 2015 09:20:03 -0600 Subject: [erlang-questions] interested in rebar3 support for the IDEA Erlang plugin? 
Contribute to the bounty :-) In-Reply-To: References: Message-ID: <1450797603.290919.474084481.6770430B@webmail.messagingengine.com> $210 :) -- Tristan Sloughter t@REDACTED On Tue, Dec 22, 2015, at 06:38 AM, Marco Molteni wrote: > If you are using the Intellij IDEA Erlang plugin > https://github.com/ignatov/intellij-erlang and would like it to support > rebar3, give some money :-) We already are at $140 > > https://www.bountysource.com/issues/28505949-rebar3-support > > thanks > marco.m > > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions From rvirding@REDACTED Wed Dec 23 02:13:38 2015 From: rvirding@REDACTED (Robert Virding) Date: Wed, 23 Dec 2015 02:13:38 +0100 Subject: [erlang-questions] State Management Problem In-Reply-To: <2933780.VmMeTyWdvx@changa> References: <1556449.EQOxXpbHRe@changa> <2933780.VmMeTyWdvx@changa> Message-ID: I just want to briefly comment on Craig's discussion about some of the reasons behind why Erlang looks like it does. We were out to solve a rather specific problem domain: telecom systems. These had some basic properties which any solution had to tackle: massive concurrency, timing constraints, and fault tolerance where parts were allowed to fail (but not too often of course) as long as the system as a whole did not crash. What became Erlang was our way of solving the problem. Everything was very problem oriented and we did not have as goals that Erlang should be a functional language or that we should implement the actor model. We knew nothing of the actor model until later when we heard that Erlang implements it. :-) The design of the language evolved together with our ideas on how language features would be used to build systems with these properties, which was the base of what became OTP. Processes were a good solution to concurrency.
Asynchronous communication was a natural, as it both fitted the problem and was natural for building non-blocking systems. How else would you do it? The error handling primitives made it easy to take down groups of cooperating processes as well as monitor them so you could clean up after them or restart them when necessary. We also had the goal to stick to the basics and to keep the language simple. Simple in the sense that there should be a small number of basic principles; if these are right then the language will be powerful but easy to comprehend and use. Small is good. This we managed to do. Again our goal was to solve the problem, not design a language with a predefined set of primitives. The reason Erlang/OTP is so successful for many other types of systems is that these also have similar requirements to what we were looking at. We weren't thinking of this at the start. IMAO one reason that Erlang/OTP has taken a long time to become adopted is that in many cases it took designers of these other systems a long time to realise that they actually did have these requirements and that Erlang was a good way of meeting them. Simple is difficult, complex is easy. Robert On 21 December 2015 at 14:56, zxq9 wrote: > On 2015-12-21 13:11:59 you wrote: > > I think I see your point. I will definitely think about it using all the > > knowledge that I have. Databases do solve the problem but I think that > for > > real time applications, going to a database would be slow. The part to > > think about for me, would be to come up with a useful API. Thanks for > your > > inputs. > > By the way, I've been down the path you are contemplating. I think > everyone needs to do this once to really understand how tools fit problems. > That is to say, I will probably never trust a system architect on a > multi-faceted project unless they've gone through this once themselves. > > Now I tend to write an Erlang service as an application service ("Oh no! > Sooooo old fashioned!").
Clients perceive that service as the mother of all > data operations, but they are high-level operations -- operations where the > verb parts are relevant to the high-level business problem and the noun > parts are themselves business-problem-level entities. > > Lower level operations (permanent storage, specialized indexing, > specialized queries, etc.) still happen in database type systems behind the > scenes. Very often that's Postgres as a canonical store, but the Erlang > application service has a "live" cache of whatever data is active and often > a request for data never makes it to the database because it already is > live in the application server. Sometimes a graphing database is involved > for super-fast queries over otherwise difficult queries -- but you might be > surprised at how much you can lean on Postgres itself without taking a > performance hit (and when you do have slow queries Postgres has a plethora > of great tools available to figure things out and tune). Sometimes a > separate copy of the data is in a document, text, image, geometric, or > graph db (or separate denormalized tables or even specialized tablespaces) > because certain searches are just plain hard to optimize for in the general > case. When you really need this sort of extra help, though, it is usually > painfully obvious -- so never start out that way, instead get the > business-level logic right first. > > Typically clients deal with a *very* small fraction of the total data in > the storage backend at any given moment, and they keep hitting that tiny > fraction over and over. This tends to be true whether its game or business > data, actually. (Surprising how similar they are.) 
Old data tends to not be > of any interest, so there is this small percentage of request churn over > whatever the going issue of the day happens to be (new contracts, open > estimates, current projects, new hr data, some specific financial report, > this month's foo, the latest dungeon content, highest-ranking player > states, etc.). > > There are major advantages to warehousing data in a DBA-approved > relational sort of way. It makes warehousing issues *much* easier to deal > with (denormalizing a copy of the data for some specific purpose, for > example) than trying to take everything out of a gigantic K-V store or > super-super performant but split personality "webscale" db and normalizing > it after the fact (protip: you will have all sorts of random loose ends > that are insanely hard to figure out, no matter what sort of "semantic > tagging" system you *think* you've invented -- sometimes this is so bad > that whatever analytic results your client is trying to discover turn out > totally wrong). > > So it still comes down to different tools for different uses. That active, > high-interest data is a *perfect* fit for ETS tables and/or process state > (depending on the case -- both are crazy fast, but you don't want every > item in a game with 500M active items to be a process). But this data > should always be rebuildable from your backend database state that comes > from a stodgy old, DBA-endorsed, safety-first sort of database where data > security is well understood and large community of experts exists. When you > start your application service up, the first requests are what populate > your ETS tables and cause processes to spawn. When resources are unneeded > those caches shrink (processes exit, tables shrink). 
> > Having clients talk to the application service instead of an ORM or the > data backends directly lets you forget that there is a MASSIVE PROBLEM with > ORM frameworks (for anything more interesting than, say, a blog website > framework), because you will still have to write a translation between your > application representation and the database representation. This is > probably the most tedious part of writing the whole system -- but if you > don't do this you will wind up re-inventing every bit of the transactional, > relational, navigation-capable, document and object database paradigms all > mishmashed together into an incoherent, buggy, half-baked API without > realizing it (until its too late to change your mind, that is... *then* > you'll certainly realize it). > > DON'T REINVENT THE DATABASE. > > But you'll do this once, no matter what I say. Anyone who architects and > then implements two huge systems does this once (either the first or the > second time). And then realize that an application-service-as-a-datasource > can be a very fast, nice thing, and that databases are magical tools that > you're really, really glad someone else went to the trouble to write. > > Also -- none of this *solves* the distributed state problem. But there is > hope! As you write systems you'll start feeling out places where it is OK > to partition the problem, where temporary inconsistency in the cached data > is OK (and where not in the backend datastore), how much write lag is OK > between the application service and the backend, what sort of queries are > pull-your-hair-out hard or slow or slow-and-hard without a specialized > database (hint: text search, image search and graph queries), and other > such issues. Also -- what data is just OK to disappear POOF! when something > goes wrong (ephemeral chat messages, for example). 
> > Its a lot to think through, and whatever tradeoffs you decide fit in > specific spots are going to depend entirely on the problem you are trying > to solve in *that* part of the system and the user-facing context. This is > true in any language and any environment. Anyone who says differently has > only ever dealt with trivial data or is a big fat liar trying to get some > magical consultancy cash from you. > > None of this is to intimidate or discourage you. Have fun. (Seriously!) > These data conundrums are some of the most delicate and interesting > problems you'll ever encounter -- and unlike algorithmic solutions to > procedural problems that translate readily to arithmetic, data problems are > *never* fully "solved". (Which also means this is a rabbit hole you can > lose your entire career/mind in... forever!) > > -Craig > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From stamarit@REDACTED Wed Dec 23 13:04:40 2015 From: stamarit@REDACTED (Salvador Tamarit) Date: Wed, 23 Dec 2015 13:04:40 +0100 Subject: [erlang-questions] CfP: Workshop on Program Transformation for Programmability in Heterogeneous Architectures (Co-located with CGO16); Deadline Jan 15 Message-ID: ********************************************************************************* PROHA 2016, CALL FOR PAPERS First Workshop on Program Transformation for Programmability in Heterogeneous Architectures http://goo.gl/RzGbzY Barcelona, 12th March 2016, in conjunction with the CGO'16 Conference ********************************************************************************* Important Dates: Paper submission deadline: 15 January 2016 23:59 (UTC) Author notification: 5 February 2016 Final manuscript due: 26 February 2016 Scope: Developing and maintaining high-performance applications and libraries for heterogeneous architectures is a difficult task, usually requiring code transformations performed by an expert. Tools assisting in and, if possible, automating such transformations are of course of great interest. However, such tools require significant knowledge and reasoning capabilities. For example, the former could be machine-understandable descriptions of what a piece of code is expected to do, while the latter could be a set of transformations and a corresponding logical context in which they are applicable. Furthermore, strategies to identify the sequence of transformations leading to the best resulting code need to be elaborated. This workshop will focus on techniques and foundations which make it possible to perform source code transformations, which preserve the intended semantics of the original code and improve efficiency, portability or maintainability. The topics of interest for the workshop include, but are not limited to: * Program annotations to capture algorithmic properties and intended code semantics.
* Programming paradigms able to express underlying (mathematical) properties of code. * Usage of dynamic and static mechanisms to infer relevant code properties. * Transformations which preserve intended semantics. * Strategies to apply transformations. * Heuristics to guide program transformation and techniques to synthesize / learn these heuristics. * Tools Submission Guidelines: Submissions are to be written in English and not exceed 10 pages, including bibliography. Submissions should be written in ACM double-column format using a 10-point type. Authors should follow the information for formatting ACM SIGPLAN conference papers, which can be found at http://www.sigplan.org/Resources/Author . Authors should submit their papers in pdf format using the EasyChair submission website https://easychair.org/conferences/?conf=proha2016 . Publication: The proceedings will be made publicly available through arXiv. Workshop Organizers: - Manuel Carro, IMDEA Software Institute and Technical University of Madrid - Colin W. Glass, University of Stuttgart - Jan Kuper, University of Twente - Julio Mariño, Technical University of Madrid - Lutz Schubert, University of Ulm - Guillermo Vigueras, IMDEA Software Institute - Salvador Tamarit, Technical University of Madrid If you have any questions, please contact the program chair at manuel.carro@REDACTED -------------- next part -------------- An HTML attachment was scrubbed... URL: From kostis@REDACTED Wed Dec 23 16:24:27 2015 From: kostis@REDACTED (Kostis Sagonas) Date: Wed, 23 Dec 2015 17:24:27 +0200 Subject: [erlang-questions] Strange difference between construction and matching of binaries Message-ID: <567ABCAB.60209@cs.ntua.gr> When playing with a new testing tool for Erlang programs, we discovered the following difference between construction and matching of binaries, which, although we understand from an implementation point-of-view, we still find sufficiently weird and worthy of at least some discussion here.
The simplest way of describing the difference between construction and matching of binaries is the following interaction with the Erlang shell: ================================================================= Eshell V7.2.1 (abort with ^G) 1> <<42:7>> = <<42:7>>. <<42:7>> 2> <<42:6>> = <<42:6>>. <<42:6>> 3> <<42:5>> = <<42:5>>. ** exception error: no match of right hand side value <<10:5>> ================================================================= For those that find the above surprising, it should be pointed out that the fine reference manual (http://www.erlang.org/doc/reference_manual/expressions.html#bit_syntax) contains the following note: When constructing binaries, if the size N of an integer segment is too small to contain the given integer, the most significant bits of the integer are silently discarded and only the N least significant bits are put into the binary. So, the next line one may want to type in the shell could be: ================================================================= 4> <<42:5>> =:= <<234:5>>. true ================================================================= This may be a bit surprising but is fine in some sense. The problem is that the fine reference manual nowhere explains what happens during matching with segments that either contain concrete values (as in the examples above) or variables that are bound to values that do not fit in the size of their segment. From what can be seen in the above examples, apparently something different happens to these segments when used in matching instead of when used in construction. Now, the problem with this difference between construction and matching of binaries containing values that do not fit in their segments is that it breaks many of the invariants that functional programmers (and their compilers!) expect to hold. 
For example, the following clause heads are not all the same: foo(<<42:5>>) -> foo(<<Int:5>>) when Int =:= 42 -> foo(Bits) when Bits =:= <<42:5>> -> and, perhaps surprisingly, only the third clause matches with <<10:5>> (as well as <<42:5>>, <<106:5>>, <<234:5>>, ...). I am willing to bet that many may find the above to break the principle of least astonishment. With this post, I want to initiate some discussion about the above in the hope that we can come up with better semantics and implementation for matching with bound binary segments than the current behavior. (Or at least formally document this difference.) Kostis From ameretat.reith@REDACTED Wed Dec 23 19:16:10 2015 From: ameretat.reith@REDACTED (Ameretat Reith) Date: Wed, 23 Dec 2015 21:46:10 +0330 Subject: [erlang-questions] Strange difference between construction and matching of binaries In-Reply-To: <567ABCAB.60209@cs.ntua.gr> References: <567ABCAB.60209@cs.ntua.gr> Message-ID: <20151223214610.246f16c9@gmail.com> ` 1) AB = <<"AB">>. 2) << AB:1/bytes, $B >> = AB. %% does not match 3) AB = << AB:1/bytes, $B >>. %% will match ` It's another example which I faced some days ago. I think the size specifier in a matching segment just applies to unbound variables, in other words when construction will happen. I think it's just an implementation issue, an attempt to avoid making intermediate variables [1]: matching is checked bit-by-bit against the referenced variables and finishes as soon as it finds the bits are not aligned anymore. I'd very much like to hear more about this too :) 1: http://www.erlang.org/doc/efficiency_guide/binaryhandling.html#match_context From mjtruog@REDACTED Thu Dec 24 09:51:02 2015 From: mjtruog@REDACTED (Michael Truog) Date: Thu, 24 Dec 2015 00:51:02 -0800 Subject: [erlang-questions] [ANN] CloudI 1.5.1 Released!
Message-ID: <567BB1F6.4050205@gmail.com> Download 1.5.1 from http://sourceforge.net/projects/cloudi/files/latest/download (checksums at the bottom of this email) CloudI (http://cloudi.org/) is a "universal integrator" using an Erlang core to provide fault-tolerance with efficiency and scalability. The CloudI API provides a minimal interface to communicate among services so programming language agnostic, database agnostic, and messaging bus agnostic integration can occur. CloudI currently integrates with the programming languages C/C++, Elixir, Erlang, Java, JavaScript, PHP, Perl, Python, and Ruby, the databases PostgreSQL, elasticsearch, Cassandra, MySQL, couchdb, memcached, and riak, the messaging bus ZeroMQ, websockets, and the internal CloudI service bus. HTTP is supported with both cowboy and elli integration. * cloudi_service_monitoring is a new CloudI service that provides metrics based on the Erlang VM and all the running CloudI services with exometer and folsom * cloudi_service_http_rest is a new CloudI service to simplify the creation of HTTP REST API handlers * CloudI service efficiency was improved by switching to maps with 18.x * CloudI now utilizes the new time warp functionality while retaining globally unique transaction ids * The CloudI logging output now has an entry for the function/arity info which is captured in Elixir and can be provided in Erlang (until EEP45) * Javascript CloudI API now works properly with node.js version > 0.12.1 * Loadtests were run (http://cloudi.org/faq.html#5_LoadTesting) * Bugs were fixed and other improvements were added (see the ChangeLog for more detail) Please mention any problems, issues, or ideas!
Thanks, Michael SHA256 CHECKSUMS edb50cf627785d5070496136699be2f122667af0ccc278b0759f53e431d6cf8c cloudi-1.5.1.tar.bz2 (11612 bytes) dfb71406434be2df197841c99371010ee3197a3ef85c0eb2faca59e388161987 cloudi-1.5.1.tar.gz (14284 bytes) From unix1@REDACTED Thu Dec 24 10:45:34 2015 From: unix1@REDACTED (Unix One) Date: Thu, 24 Dec 2015 09:45:34 +0000 Subject: [erlang-questions] Behavior callbacks in edoc output In-Reply-To: References: Message-ID: I guess no news is bad news. I'm going to manually type the callback specifications in, while at the same time looking for an alternative to edoc. Thank you in advance for any insight. On 12/20/2015 05:00 AM, Unix One wrote: > Hello, > > Is it possible to have edoc generate documentation for behavior callbacks? > > I found the following: > > http://erlang.org/pipermail/erlang-patches/2012-July/002912.html > http://erlang.org/pipermail/erlang-questions/2012-August/068722.html > > which reference planned fix, but edoc still generates a mostly blank > behavior output with only the top level module description. Adding a > @doc block before the -callback results in the edoc error about it not > being allowed in module footer. > > It seems like an important piece of documentation to omit from output. > Am I missing something obvious? > > Thank you! > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions > From pierrefenoll@REDACTED Thu Dec 24 11:02:16 2015 From: pierrefenoll@REDACTED (Pierre Fenoll) Date: Thu, 24 Dec 2015 11:02:16 +0100 Subject: [erlang-questions] Behavior callbacks in edoc output In-Reply-To: References: Message-ID: Edoc not handling -callback sounds like a bug (or something not implemented yet). You should file a bug at http://bugs.erlang.org/ Cheers, -- Pierre Fenoll On 24 December 2015 at 10:45, Unix One wrote: > I guess no news is bad news. 
I'm going to manually type the callback > specifications in, while at the same time looking for an alternative to > edoc. > > Thank you in advance for any insight. > > On 12/20/2015 05:00 AM, Unix One wrote: > > Hello, > > > > Is it possible to have edoc generate documentation for behavior > callbacks? > > > > I found the following: > > > > http://erlang.org/pipermail/erlang-patches/2012-July/002912.html > > http://erlang.org/pipermail/erlang-questions/2012-August/068722.html > > > > which reference planned fix, but edoc still generates a mostly blank > > behavior output with only the top level module description. Adding a > > @doc block before the -callback results in the edoc error about it not > > being allowed in module footer. > > > > It seems like an important piece of documentation to omit from output. > > Am I missing something obvious? > > > > Thank you! > > _______________________________________________ > > erlang-questions mailing list > > erlang-questions@REDACTED > > http://erlang.org/mailman/listinfo/erlang-questions > > > > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions > -------------- next part -------------- An HTML attachment was scrubbed... URL: From unix1@REDACTED Thu Dec 24 17:54:56 2015 From: unix1@REDACTED (Unix One) Date: Thu, 24 Dec 2015 16:54:56 +0000 Subject: [erlang-questions] Behavior callbacks in edoc output In-Reply-To: References: Message-ID: On 12/24/2015 02:02 AM, Pierre Fenoll wrote: > Edoc not handling -callback sounds like a bug (or something not > implemented yet). > You should file a bug at http://bugs.erlang.org/ It looks like someone already filed a bug report http://bugs.erlang.org/browse/ERL-66 shortly after my original post to this list. 
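To make the thread above concrete: the -callback attributes that edoc was skipping are module attributes of roughly the following shape. The behaviour name and callback signatures below are invented for illustration only:

```erlang
%% A hypothetical behaviour module. Implementations of this behaviour
%% must export init/1 and handle_event/2; edoc, at the time of this
%% thread, produced no documentation for these -callback declarations.
-module(my_behaviour).

-callback init(Args :: term()) ->
    {ok, State :: term()} | {error, Reason :: term()}.

-callback handle_event(Event :: term(), State :: term()) ->
    {ok, NewState :: term()}.
```

An implementing module would then declare `-behaviour(my_behaviour).` and the compiler would warn about missing callback exports, even though edoc said nothing about them.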
From shiker2003@REDACTED Thu Dec 24 16:23:41 2015 From: shiker2003@REDACTED (shiker2003) Date: Thu, 24 Dec 2015 23:23:41 +0800 Subject: [erlang-questions] How to find the root cause of "Bad value on output port 'efile'" Message-ID: Hi everyone, I developed a tool (using sockets to send data) to simulate lots of terminals, which send fake data to a server. I use this tool to run performance tests. log4erl is used to print log messages to a file. "Bad value on output port 'efile'" comes up, and I have no idea how to find the root cause. I tried to get some help from a search engine, but it helped little, although I did find some posts about "Bad value on output port 'tcp_inet'", which is caused by a bad argument to gen_tcp:send. I guess efile has something to do with file operations, like writing data to a file, but I don't operate on files directly. The only module that works with files is log4erl, which writes the log files. Another point to pay attention to is that the error is random. Any suggestion or help is appreciated. Best regards, Eric Sun. -------------- next part -------------- An HTML attachment was scrubbed... URL: From max.lapshin@REDACTED Thu Dec 24 18:19:20 2015 From: max.lapshin@REDACTED (Max Lapshin) Date: Thu, 24 Dec 2015 20:19:20 +0300 Subject: [erlang-questions] How to find the root cause of "Bad value on output port 'efile'" In-Reply-To: References: Message-ID: Usually it is because your iolist contains undefined, a pid, or anything else except lists, integers (0..255), or binaries. -------------- next part -------------- An HTML attachment was scrubbed... URL: From nem@REDACTED Fri Dec 25 02:42:51 2015 From: nem@REDACTED (Geoff Cant) Date: Fri, 25 Dec 2015 14:42:51 +1300 Subject: [erlang-questions] Edump: a new crashdump analysis library for the holidays. Message-ID: Hi all, merry holiday of your choice.
This year, I got a nice block of time in New Zealand and wrote a new Erlang Crashdump analysis library that I've been meaning to get around to for ages. https://github.com/archaelus/edump is a library for building indexes of crashdump files so that you can have efficient random access to information in arbitrarily large dump files. The index step is a one time cost that saves significant time for subsequent analyses. Edump also provides parsers for most segment types, and analysis code that can reconstruct most process information (process dictionary terms, messages in message queues, values on the call stack and so on). Edump has a few high level analyses - process trees (in graphviz .dot format), process list summaries, system memory summaries and so on. There's also an escript tool that provides a rudimentary CLI for poking around in crashdumps. It should be useful right now - I'd love github issue reports of dump files it can't analyse and other bugs. Issues I know about (and would like help with): * edump info is pretty basic. How do you even write CLI tools in Erlang? (escript with getopt seems OK, but I was hoping for a CLI tool framework that would be a bit more comprehensive (how do I have high level tasks that have lots of different options per task?)) Suggestions for a CLI framework would be welcome (I looked at clique, but it didn't seem quite like me) * More analyses, particularly process graphs. * Better .dot output that draws process trees the way people expect to read them (how do I get the "ur process" "erlang" at the top of the graph?) * edump doesn't have a web interface but should. * The build process is a pain (rebar3 compile, rebar3 rdtl, rebar3 escriptize - I don't know how to make that a one step process). Anyway, I hope you find it useful (or you file a bug :) Happy holidays, -Geoff -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed...
Name: erl_crash.png Type: image/png Size: 78639 bytes Desc: not available URL: From t@REDACTED Fri Dec 25 02:52:03 2015 From: t@REDACTED (Tristan Sloughter) Date: Thu, 24 Dec 2015 19:52:03 -0600 Subject: [erlang-questions] Edump: a new crashdump analysis library for the holidays. In-Reply-To: References: Message-ID: <1451008323.1977605.475989842.32F162C2@webmail.messagingengine.com> Make `rdtl` a post compile hook. escriptize runs compile automatically so if you add: {provider_hooks, [{post, [{compile, rdtl}]}]}. Then running `rebar3 escriptize` will run compile, rdtl, escriptize. Also without the hook you can do: `rebar3 do compile, rdtl, escriptize` -- Tristan Sloughter t@REDACTED On Thu, Dec 24, 2015, at 07:42 PM, Geoff Cant wrote: > Hi all, merry holiday of your choice. This year, I got a nice block of time in New Zealand and wrote a new Erlang Crashdump analysis library that I've been meaning to get around to for ages. > > https://github.com/archaelus/edump is a library for building indexes of crashdump files so that you can have efficient random access to information in arbitrarily large dump files. The index step is a one time cost that saves significant time for subsequent analyses. > > Edump also provides parsers for most segment types, and analysis code that can reconstruct most process information (process dictionary terms, messages in message queues, values on the call stack and so on). Edump has a few high level analyses - process trees (in graphviz .dot format), process list summaries, system memory summaries and so on. There's also an escript tool that provides a rudimentary CLI for poking around in crashdumps. > > > > It should be useful right now - I'd love github issue reports of dump files it can't analyse and other bugs. > > Issues I know about (and would like help with): > * edump info is pretty basic. How do you even write CLI tools in Erlang? (escript with getopt seems OK, but I was hoping for a CLI tool framework that would be a bit more comprehensive (how do I have high level tasks that have lots of different options per task?)) Suggestions for a CLI framework would be welcome (I looked at clique, but it didn't seem quite like me) > * More analyses, particularly process graphs. > * Better .dot output that draws process trees the way people expect to read them (how do I get the "ur process" "erlang" at the top of the graph?) > * edump doesn't have a web interface but should. > * The build process is a pain (rebar3 compile, rebar3 rdtl, rebar3 escriptize - I don't know how to make that a one step process). > > > Anyway, I hope you find it useful (or you file a bug :) > > Happy holidays, > -Geoff > > _________________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions > Email had 1 attachment: > * erl_crash.png > 105k (image/png) -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: erl_crash.png Type: image/png Size: 78639 bytes Desc: not available URL: From ka8725@REDACTED Sun Dec 27 04:38:31 2015 From: ka8725@REDACTED (Andrey Koleshko) Date: Sun, 27 Dec 2015 06:38:31 +0300 Subject: [erlang-questions] Question about reverse list of recursion functions Message-ID: Hi, guys! I recently started to learn Erlang (migrating from Ruby) by reading Cesarini's "Erlang Programming" book. After the 3rd chapter there is a task to implement quicksort, and I implemented it like this: https://gist.github.com/ka8725/f3fcc264e12bcefa6035 It works well, but I have a question that doesn't let me sleep - is it normal practice to do `reverse` every time you end up with a list in reversed order after a few recursive calls? Is there another practice to avoid it? Thanks!
________________ Best Regards, Andrey Koleshko ka8725@REDACTED -------------- next part -------------- An HTML attachment was scrubbed... URL: From ivan@REDACTED Sun Dec 27 10:10:57 2015 From: ivan@REDACTED (Ivan Uemlianin) Date: Sun, 27 Dec 2015 09:10:57 +0000 Subject: [erlang-questions] Question about reverse list of recursion functions In-Reply-To: References: Message-ID: <1451207459403-0089ce93-dc01c501-d1b125a2@llaisdy.com> Dear Andrey Using lists:reverse/1 is normal practice. The lists functions are implemented in C and are fast. I think some of them also do non-erlangy cheats, e.g., reverse reverses the list "in place". Best wishes Ivan On Sun, Dec 27, 2015 at 3:38 am, Andrey Koleshko wrote: Hi, guys! I recently started to learn Erlang (migrating from Ruby) by reading "Erlang Programming" (the Cesarini book). After the 3rd chapter there is a task to implement quick sort and I implemented it like this: https://gist.github.com/ka8725/f3fcc264e12bcefa6035 It works well, but I have a question that doesn't let me sleep - is it normal practice to do `reverse` every time you have a list in reversed order after a few recursive calls? Is there another practice to avoid it? Thanks! ________________ Best Regards, Andrey Koleshko ka8725@REDACTED -------------- next part -------------- An HTML attachment was scrubbed... URL: From dmkolesnikov@REDACTED Sun Dec 27 11:12:58 2015 From: dmkolesnikov@REDACTED (Dmitry Kolesnikov) Date: Sun, 27 Dec 2015 12:12:58 +0200 Subject: [erlang-questions] Question about reverse list of recursion functions In-Reply-To: References: Message-ID: <76D6D62B-869B-4FD9-910B-6582AC5E4404@gmail.com> Hello, The reverse is acceptable if you are using lists:reverse/1 (this is a built-in function written in C).
There is an alternative: you can use the lists:foldr function to avoid the unnecessary reverse operation: ``` divide_to_lesser_and_bigger(List, X) -> lists:foldr(fun(E, Acc) -> lesser_and_bigger(E, X, Acc) end, {[], []}, List). lesser_and_bigger(E, X, {Lesser, Bigger}) when E < X -> {[E|Lesser], Bigger}; lesser_and_bigger(E, _, {Lesser, Bigger}) -> {Lesser, [E|Bigger]}. ``` - Dmitry > On Dec 27, 2015, at 5:38 AM, Andrey Koleshko wrote: > > Hi, guys! I recently started to learn Erlang (migrating from Ruby) by reading "Erlang Programming" (the Cesarini book). After the 3rd chapter there is a task to implement quick sort and I implemented it like this: https://gist.github.com/ka8725/f3fcc264e12bcefa6035 > It works well, but I have a question that doesn't let me sleep - is it normal practice to do `reverse` every time you have a list in reversed order after a few recursive calls? Is there another practice to avoid it? Thanks! > ________________ > Best Regards, > Andrey Koleshko > ka8725@REDACTED > > > > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions From nx@REDACTED Sun Dec 27 20:31:51 2015 From: nx@REDACTED (nx) Date: Sun, 27 Dec 2015 19:31:51 +0000 Subject: [erlang-questions] User Account Management Message-ID: Is there a solution out there for representing users and organizations similar to how Github contexts work? I'm wanting to implement a REST identity provider service (or use a pre-existing one) to manage user accounts and authentication. -------------- next part -------------- An HTML attachment was scrubbed...
URL: From erlang@REDACTED Sun Dec 27 22:57:53 2015 From: erlang@REDACTED (Joe Armstrong) Date: Sun, 27 Dec 2015 22:57:53 +0100 Subject: [erlang-questions] Question about reverse list of recursion functions In-Reply-To: <76D6D62B-869B-4FD9-910B-6582AC5E4404@gmail.com> References: <76D6D62B-869B-4FD9-910B-6582AC5E4404@gmail.com> Message-ID: On Sun, Dec 27, 2015 at 11:12 AM, Dmitry Kolesnikov wrote: > Hello, > > The reverse is acceptable if you are using lists:reverse/1 (this is a built-in function written in C). > > There is an alternative: you can use the lists:foldr function to avoid the unnecessary reverse operation: Which introduces an unnecessary tuple construction and deconstruction which may or may not be faster... At the end of the day an algorithm is fast enough or not so - you have to measure to find out. You should choose the simplest correct algorithm that is sufficiently fast. Cheers /Joe > > ``` > > divide_to_lesser_and_bigger(List, X) -> > lists:foldr(fun(E, Acc) -> lesser_and_bigger(E, X, Acc) end, {[], []}, List). > > lesser_and_bigger(E, X, {Lesser, Bigger}) when E < X -> > {[E|Lesser], Bigger}; > lesser_and_bigger(E, _, {Lesser, Bigger}) -> > {Lesser, [E|Bigger]}. > > ``` > > - Dmitry > > > >> On Dec 27, 2015, at 5:38 AM, Andrey Koleshko wrote: >> >> Hi, guys! I recently started to learn Erlang (migrating from Ruby) by reading "Erlang Programming" (the Cesarini book). After the 3rd chapter there is a task to implement quick sort and I implemented it like this: https://gist.github.com/ka8725/f3fcc264e12bcefa6035 >> It works well, but I have a question that doesn't let me sleep - is it normal practice to do `reverse` every time you have a list in reversed order after a few recursive calls? Is there another practice to avoid it? Thanks!
>> ________________ >> Best Regards, >> Andrey Koleshko >> ka8725@REDACTED >> >> >> >> _______________________________________________ >> erlang-questions mailing list >> erlang-questions@REDACTED >> http://erlang.org/mailman/listinfo/erlang-questions > > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions From cian@REDACTED Sun Dec 27 23:13:12 2015 From: cian@REDACTED (Cian Synnott) Date: Sun, 27 Dec 2015 22:13:12 +0000 Subject: [erlang-questions] Question about reverse list of recursion functions In-Reply-To: References: <76D6D62B-869B-4FD9-910B-6582AC5E4404@gmail.com> Message-ID: On Sun, Dec 27, 2015 at 9:57 PM, Joe Armstrong wrote: > At the end of the day an algorithm is fast enough or not so - you have to > measure to find out. You should choose the simplest correct algorithm > that is sufficiently fast. > I did some simple measurement of lists:reverse/1 performance about this time last year in response to some questions on #erlang: http://emauton.org/2015/01/25/lists:reverse-1-performance-in-erlang/ TL;DR: "don't worry": assuming we're doing something interesting when we process each list element, lists:reverse/1 is unlikely to dominate our runtime. Cian From gomoripeti@REDACTED Mon Dec 28 02:23:16 2015 From: gomoripeti@REDACTED (=?UTF-8?B?UGV0aSBHw7Ztw7ZyaQ==?=) Date: Mon, 28 Dec 2015 02:23:16 +0100 Subject: [erlang-questions] Strange difference between construction and matching of binaries In-Reply-To: <20151223214610.246f16c9@gmail.com> References: <567ABCAB.60209@cs.ntua.gr> <20151223214610.246f16c9@gmail.com> Message-ID: I'd just like to mention this related bug-report http://bugs.erlang.org/browse/ERL-44 reported by yet another surprised person. On Wed, Dec 23, 2015 at 7:16 PM, Ameretat Reith wrote: > ` > 1) AB = <<"AB">>. > 2) << AB:1/bytes, $B >> = AB. %% does not match > 3) AB = << AB:1/bytes, $B >>. 
%% will match > ` > > It's another example which I faced some days ago. I think the size > specifier in a matching segment just applies to unbound variables, in > other words when construction will happen. > > I think it's just an implementation issue and a try to avoid making > intermediate variables [1], and matching will be checked bit-by-bit with > referenced variables and finished as soon as it finds the bits are not > aligned anymore. I would very much like to hear more about this too :) > > 1: > > http://www.erlang.org/doc/efficiency_guide/binaryhandling.html#match_context > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dmkolesnikov@REDACTED Mon Dec 28 00:56:20 2015 From: dmkolesnikov@REDACTED (Dmitry Kolesnikov) Date: Mon, 28 Dec 2015 01:56:20 +0200 Subject: [erlang-questions] Question about reverse list of recursion functions In-Reply-To: References: <76D6D62B-869B-4FD9-910B-6582AC5E4404@gmail.com> Message-ID: Hello Joe, Thank you for pointing this out! I was wrong! lists:foldr looks lucrative but tuple creation kills performance. I've played around with various versions of quick sort implementations, measured their performance on a list of 10K elements. I've tried to replace the tuple accumulator with a list accumulator but ... As the conclusion, the original implementation that uses lists:reverse/1 is the best one in terms of performance. Here are the visualized results of the benchmark and code. -------------- next part -------------- A non-text attachment was scrubbed... Name: summary.png Type: image/png Size: 254310 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed...
Name: qsort.erl Type: application/octet-stream Size: 2154 bytes Desc: not available URL: -------------- next part -------------- Please note that the graph shows time in microseconds even though the label says (ms). The legend is as follows: sortl - quick sort uses lists:reverse sortf - quick sort uses lists:foldr and a tuple as accumulator sortfl - quick sort uses lists:foldr and a list [Lesser|Bigger] as accumulator sortx - quick sort uses list replacement using [Lesser|Bigger] as accumulator I think sortl is the fastest because it uses tail recursion while the others use normal recursion. Best Regards, Dmitry > On Dec 27, 2015, at 11:57 PM, Joe Armstrong wrote: > > On Sun, Dec 27, 2015 at 11:12 AM, Dmitry Kolesnikov > wrote: >> Hello, >> >> The reverse is acceptable if you are using lists:reverse/1 (this is a built-in function written in C). >> >> There is an alternative: you can use the lists:foldr function to avoid the unnecessary reverse operation: > > Which introduces an unnecessary tuple construction and deconstruction > which may or may not be faster... > > At the end of the day an algorithm is fast enough or not so - you have to > measure to find out. You should choose the simplest correct algorithm > that is sufficiently fast. > > Cheers > > /Joe > > >> >> ``` >> >> divide_to_lesser_and_bigger(List, X) -> >> lists:foldr(fun(E, Acc) -> lesser_and_bigger(E, X, Acc) end, {[], []}, List). >> >> lesser_and_bigger(E, X, {Lesser, Bigger}) when E < X -> >> {[E|Lesser], Bigger}; >> lesser_and_bigger(E, _, {Lesser, Bigger}) -> >> {Lesser, [E|Bigger]}. >> >> ``` >> >> - Dmitry >> >> >> >>> On Dec 27, 2015, at 5:38 AM, Andrey Koleshko wrote: >>> >>> Hi, guys! I recently started to learn Erlang (migrating from Ruby) by reading "Erlang Programming" (the Cesarini book). After the 3rd chapter there is a task to implement quick sort and I implemented it like this: https://gist.github.com/ka8725/f3fcc264e12bcefa6035 >>> It works well, but I have a question that doesn't let me sleep - is it normal practice to do `reverse` every time you have a list in reversed order after a few recursive calls? Is there another practice to avoid it? Thanks! >>> ________________ >>> Best Regards, >>> Andrey Koleshko >>> ka8725@REDACTED >>> >>> >>> >>> _______________________________________________ >>> erlang-questions mailing list >>> erlang-questions@REDACTED >>> http://erlang.org/mailman/listinfo/erlang-questions >> >> _______________________________________________ >> erlang-questions mailing list >> erlang-questions@REDACTED >> http://erlang.org/mailman/listinfo/erlang-questions From dmkolesnikov@REDACTED Mon Dec 28 10:56:57 2015 From: dmkolesnikov@REDACTED (Dmitry Kolesnikov) Date: Mon, 28 Dec 2015 11:56:57 +0200 Subject: [erlang-questions] package escript to deliverable unit Message-ID: <82227887-C9D8-4311-8A2F-DD5E69C808C8@gmail.com> Hello, I am looking for a cross-platform solution to share an escriptized command-line application with non-Erlang colleagues. They are obviously missing the Erlang VM and there is very low motivation to compile it from sources. Cross-platform implies MacOS and Linux (Ubuntu and CentOS). Has anyone experienced the same issue and found a solution? Best Regards, Dmitry From pierrefenoll@REDACTED Mon Dec 28 12:34:49 2015 From: pierrefenoll@REDACTED (Pierre Fenoll) Date: Mon, 28 Dec 2015 12:34:49 +0100 Subject: [erlang-questions] package escript to deliverable unit In-Reply-To: <82227887-C9D8-4311-8A2F-DD5E69C808C8@gmail.com> References: <82227887-C9D8-4311-8A2F-DD5E69C808C8@gmail.com> Message-ID: <5AB15F35-276A-4FF0-B018-0D22F8341C5A@gmail.com> An idea: tar a release (with relx for example) and do some magic trick / a shell script to execute it as an escript (see vm.args)?
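A concrete sketch of that tar-plus-wrapper idea (every file name here is made up; a real tarball from `relx tar` would replace the demo payload, and `bin/myapp` stands in for the release's start script):

```shell
# Sketch: make a release tarball self-running by prepending a shell stub
# that unpacks it to a temp dir and runs the start script.
set -e

# Demo payload standing in for a real release tarball.
mkdir -p rel/bin
printf '#!/bin/sh\necho "hello from the release"\n' > rel/bin/myapp
chmod +x rel/bin/myapp
tar -czf myapp.tar.gz -C rel bin

# Build the self-extracting runner: shell stub + appended tarball.
cat > myapp-run.sh <<'STUB'
#!/bin/sh
set -e
WORKDIR=$(mktemp -d)
trap 'rm -rf "$WORKDIR"' EXIT
# The tarball payload begins on the line after the __ARCHIVE__ marker.
SKIP=$(awk '/^__ARCHIVE__$/ {print NR + 1; exit}' "$0")
tail -n +"$SKIP" "$0" | tar -xzf - -C "$WORKDIR"
"$WORKDIR"/bin/myapp "$@"
exit $?
__ARCHIVE__
STUB
cat myapp.tar.gz >> myapp-run.sh
chmod +x myapp-run.sh

./myapp-run.sh   # prints: hello from the release
```

The `exit $?` before the marker keeps the shell from trying to execute the binary payload, and the anchored awk pattern avoids matching the awk line itself.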
> On 28 Dec 2015, at 10:56, Dmitry Kolesnikov wrote: > > Hello, > > I am looking for a cross-platform solution to share an escriptized command-line application with non-Erlang colleagues. > They are obviously missing the Erlang VM and there is very low motivation to compile it from sources. Cross-platform implies MacOS and Linux (Ubuntu and CentOS). > > Has anyone experienced the same issue and found a solution? > > Best Regards, > Dmitry > > > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions From erlang@REDACTED Mon Dec 28 19:31:28 2015 From: erlang@REDACTED (Joe Armstrong) Date: Mon, 28 Dec 2015 19:31:28 +0100 Subject: [erlang-questions] Question about reverse list of recursion functions In-Reply-To: References: <76D6D62B-869B-4FD9-910B-6582AC5E4404@gmail.com> Message-ID: On Mon, Dec 28, 2015 at 12:56 AM, Dmitry Kolesnikov wrote: > Hello Joe, > > Thank you for pointing this out! I was wrong! lists:foldr looks lucrative but tuple creation kills performance. > > I've played around with various versions of quick sort implementations, measured their performance on a list of 10K elements. I've tried to replace the tuple accumulator with a list accumulator but ... As the conclusion, the original implementation that uses lists:reverse/1 is the best one in terms of performance. > Excellent - always measure things > Here are the visualized results of the benchmark and code. > Please note that the graph shows time in microseconds even though the label says (ms). > The legend is as follows: > sortl - quick sort uses lists:reverse > sortf - quick sort uses lists:foldr and a tuple as accumulator > sortfl - quick sort uses lists:foldr and a list [Lesser|Bigger] as accumulator > sortx - quick sort uses list replacement using [Lesser|Bigger] as accumulator > > I think sortl is the fastest because it uses tail recursion while the others use normal recursion.
As an additional exercise you should measure and compare with lists:sort. Using list comprehensions for sorting is mainly for "showing off" - it's very short and elegant code but not efficient. lists:sort special-cases sorting short lists and has explicit knowledge of the list ordering, so it knows if the list is in normal or reversed order and makes use of this to speed things up. I might also add that in almost 30 years of Erlang programming I have never had to rewrite code because reversing or sorting lists was too slow. The most usual place to find a performance bottleneck has always been getting data from the outside world into Erlang - this usually involves some form of parsing which involves looking at every character - once you've got your data into lists the rest is usually very quick - getting the data into and out of lists from the outside world is slow. Cheers /Joe > > Best Regards, > Dmitry > > >> On Dec 27, 2015, at 11:57 PM, Joe Armstrong wrote: >> >> On Sun, Dec 27, 2015 at 11:12 AM, Dmitry Kolesnikov >> wrote: >>> Hello, >>> >>> The reverse is acceptable if you are using lists:reverse/1 (this is a built-in function written in C). >>> >>> There is an alternative: you can use the lists:foldr function to avoid the unnecessary reverse operation: >> >> Which introduces an unnecessary tuple construction and deconstruction >> which may or may not be faster... >> >> At the end of the day an algorithm is fast enough or not so - you have to >> measure to find out. You should choose the simplest correct algorithm >> that is sufficiently fast. >> >> Cheers >> >> /Joe >> >> >>> >>> ``` >>> >>> divide_to_lesser_and_bigger(List, X) -> >>> lists:foldr(fun(E, Acc) -> lesser_and_bigger(E, X, Acc) end, {[], []}, List). >>> >>> lesser_and_bigger(E, X, {Lesser, Bigger}) when E < X -> >>> {[E|Lesser], Bigger}; >>> lesser_and_bigger(E, _, {Lesser, Bigger}) -> >>> {Lesser, [E|Bigger]}.
>>> >>> ``` >>> >>> - Dmitry >>> >>> >>> >>>> On Dec 27, 2015, at 5:38 AM, Andrey Koleshko wrote: >>>> >>>> Hi, guys! I recently started to learn Erlang (migrating from Ruby) by reading "Erlang Programming" (the Cesarini book). After the 3rd chapter there is a task to implement quick sort and I implemented it like this: https://gist.github.com/ka8725/f3fcc264e12bcefa6035 >>>> It works well, but I have a question that doesn't let me sleep - is it normal practice to do `reverse` every time you have a list in reversed order after a few recursive calls? Is there another practice to avoid it? Thanks! >>>> ________________ >>>> Best Regards, >>>> Andrey Koleshko >>>> ka8725@REDACTED >>>> >>>> >>>> >>>> _______________________________________________ >>>> erlang-questions mailing list >>>> erlang-questions@REDACTED >>>> http://erlang.org/mailman/listinfo/erlang-questions >>> >>> _______________________________________________ >>> erlang-questions mailing list >>> erlang-questions@REDACTED >>> http://erlang.org/mailman/listinfo/erlang-questions From kennethlakin@REDACTED Tue Dec 29 07:28:30 2015 From: kennethlakin@REDACTED (Kenneth Lakin) Date: Mon, 28 Dec 2015 22:28:30 -0800 Subject: [erlang-questions] gen_server vs. gen_fsm: Timeout type discrepancy? Message-ID: <5682280E.2080806@gmail.com> I know that I've looked at the relevant documentation like a million times, but this just hit me: In the documentation for gen_server:handle_*, the Timeout part of the Result is typed as int()>= 0 | infinity However, in the gen_fsm:StateName documentation, Timeout is typed as int()>0 | infinity Given that I kinda expect the main loop of gen_fsm and gen_server to be *really* similar, I would also expect the type for Timeout to be the same for gen_fsm and gen_server. Is the documentation incorrect, or is there a problem with my expectations? If the documentation is incorrect, which definition of Timeout is correct? Thanks!
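For reference, the Timeout in question is the extra element of a callback's return value. A minimal sketch of how I understand it to be used with gen_server (not tested):

```erlang
%% Sketch: gen_server callbacks using the Timeout return element.
%% gen_server documents it as int() >= 0 | infinity; returning 0 asks for
%% a 'timeout' message in handle_info/2 as soon as the mailbox is empty.
handle_call(ping, _From, State) ->
    {reply, pong, State, 0};
handle_call(_Req, _From, State) ->
    {reply, ok, State, infinity}.

handle_info(timeout, State) ->
    %% deferred work runs here after the timeout fires
    {noreply, State}.
```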
-------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: OpenPGP digital signature URL: From paulperegud@REDACTED Tue Dec 29 09:50:28 2015 From: paulperegud@REDACTED (Paul Peregud) Date: Tue, 29 Dec 2015 09:50:28 +0100 Subject: [erlang-questions] gen_server vs. gen_fsm: Timeout type discrepancy? In-Reply-To: <5682280E.2080806@gmail.com> References: <5682280E.2080806@gmail.com> Message-ID: gen_fsm:loop/7 uses receive ... after Time ... end where Time :: int() >= 0 | infinity. So it must be an error in the gen_fsm:StateName documentation. On Tue, Dec 29, 2015 at 7:28 AM, Kenneth Lakin wrote: > I know that I've looked at the relevant documentation like a million > times, but this just hit me: > > In the documentation for gen_server:handle_*, the Timeout part of the > Result is typed as > > int()>= 0 | infinity > > However, in the gen_fsm:StateName documentation, Timeout is typed as > > int()>0 | infinity > > Given that I kinda expect the main loop of gen_fsm and gen_server to be > *really* similar, I would also expect the type for Timeout to be the > same for gen_fsm and gen_server. Is the documentation incorrect, or is > there a problem with my expectations? > > If the documentation is incorrect, which definition of Timeout is correct? > > Thanks! > > > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions > -- Best regards, Paul Peregud +48602112091 From ok@REDACTED Thu Dec 31 01:09:34 2015 From: ok@REDACTED (ok@REDACTED) Date: Thu, 31 Dec 2015 13:09:34 +1300 Subject: [erlang-questions] Question about reverse list of recursion functions In-Reply-To: References: Message-ID: > Hi, guys! I recently started to learn Erlang (migrating from Ruby) by > reading "Erlang Programming" (the Cesarini book). After the 3rd chapter > there is a task to implement quick sort and I implemented it like this: > https://gist.github.com/ka8725/f3fcc264e12bcefa6035 This was an exercise, so whatever you learn from is right. In practice, you should be using the list reversal and sorting functions from the lists: module. Instead of returning a compound data structure, it's often useful to think "continuations": f(...) -> ...; f(...) -> {A,B,C}. g(...) -> {A,B,C} = f(...), h(A, B, C). => g(...) -> f(.....). f(.....) -> ...; f(.....) -> h(A, B, C, ...). In the case of things like sorting, you can think of even and odd layers, working in opposite directions. I'll leave you to work out the details. Once you've eliminated reversing, the next thing would be to reduce concatenation. (The wow-neat version of qsort in Prolog uses list differences for this purpose; Erlang would pass an accumulator, so qsort(List, Acc) === qsort(List) ++ Acc) However, you've missed the REALLY major point about the code, the reason you should never use it. (A) It's quicksort, which has O(N**2) worst case. Yes, I know the books say this will almost never happen, but in practice it happens surprisingly often. As Bentley and McIlroy pointed out in their "Engineering a Quicksort" paper, repeated values can kill a simple-minded quicksort. You need a "fat pivot", that is, you need to split the list THREE ways: part([X|Xs], P, L, E, G, A) -> if X < P -> part(Xs, P, [X|L], E, G, A) ; P < X -> part(Xs, P, L, E, [X|G], A) ; true -> part(Xs, P, L, [X|E], G, A) end; part([], _, L, E, G, A) -> sort(L, E ++ sort(G, A)). sort(List = [_,Pivot|_], Acc) -> part(List, Pivot, [], [], [], Acc); sort(List, Acc) -> List ++ Acc. sort(List) -> sort(List, []). Oh, by the way, you will notice that this does no reversal. Why would it? Yes, the intermediate lists L and G are delivered in reverse order, but we're going to *sort* them, so we don't *care* what order they are in.
We would care if we were trying to make a stable sorting routine, but quicksort is not a stable sorting routine anyway. Reversing a list whose order you don't care about is pointless. (B) Even with a "fat pivot" quicksort can still go quadratic. Taking the first element of the list as the pivot means it WILL go quadratic if the input is already sorted or nearly sorted, which turns out to be quite common in practice. This is why array-based quicksorts use median-of-three pivot selection and the Bentley & McIlroy version uses the median of three medians of three for large enough inputs. Many people are impressed by the name "quick"sort and don't look into the background. When Hoare invented quick sort, he (a) already worked out most of the improvements people talk about and (b) KNEW that on a good day with the wind behind it quicksort would do more comparisons than merge sort. BUT he needed to sort a bunch of numbers on a machine with an amount of memory that would disgrace a modern singing greeting card and just didn't have enough memory for workspace. If comparisons are costly, it's easy for a well-written merge sort to beat quicksort. For what it's worth, I have tested the code above but not benchmarked it. From ka8725@REDACTED Thu Dec 31 01:37:04 2015 From: ka8725@REDACTED (Andrey Koleshko) Date: Thu, 31 Dec 2015 03:37:04 +0300 Subject: [erlang-questions] Question about reverse list of recursion functions In-Reply-To: References: Message-ID: <8A49AC37-9D94-401A-8C38-B12B714165C5@gmail.com> I completely understand that it's an artificial task and is not intended to be used in production. And, of course, I understand that I need to use the functions from the lists module. I just faced the reverse issue a few times in other cases and that's why this question has arisen. The quick sort function was just an example to demonstrate what I mean. Thanks for the "continuations" idea - it's very helpful. Thank you, guys, for your responses. They are very impressive.
________________ Best Regards, Andrey Koleshko ka8725@REDACTED > On Dec 31, 2015, at 3:09 AM, wrote: > >> Hi, guys! I recently started to learn Erlang (migrating from Ruby) by >> reading "Erlang Programming" (the Cesarini book). After the 3rd chapter >> there is a task to implement quick sort and I implemented it like this: >> https://gist.github.com/ka8725/f3fcc264e12bcefa6035 > > This was an exercise, so whatever you learn from is right. > > In practice, you should be using the list reversal and sorting > functions from the lists: module. > > Instead of returning a compound data structure, it's often > useful to think "continuations": > f(...) -> ...; f(...) -> {A,B,C}. > g(...) -> {A,B,C} = f(...), h(A, B, C). > => > g(...) -> f(.....). > f(.....) -> ...; f(.....) -> h(A, B, C, ...). > > In the case of things like sorting, you can think of even and > odd layers, working in opposite directions. I'll leave you > to work out the details. > > Once you've eliminated reversing, the next thing would be > to reduce concatenation. (The wow-neat version of qsort > in Prolog uses list differences for this purpose; Erlang > would pass an accumulator, so > qsort(List, Acc) === qsort(List) ++ Acc) > > However, you've missed the REALLY major point about the > code, the reason you should never use it. > > (A) It's quicksort, which has O(N**2) worst case. > Yes, I know the books say this will almost never happen, > but in practice it happens surprisingly often. > > As Bentley and McIlroy pointed out in their "Engineering > a Quicksort" paper, repeated values can kill a simple-minded > quicksort. You need a "fat pivot", that is, you > need to split the list THREE ways: > > part([X|Xs], P, L, E, G, A) -> > if X < P -> part(Xs, P, [X|L], E, G, A) > ; P < X -> part(Xs, P, L, E, [X|G], A) > ; true -> part(Xs, P, L, [X|E], G, A) > end; > part([], _, L, E, G, A) -> > sort(L, E ++ sort(G, A)).
> > sort(List = [_,Pivot|_], Acc) -> > part(List, Pivot, [], [], [], Acc); > sort(List, Acc) -> > List ++ Acc. > > sort(List) -> > sort(List, []). > > > Oh, by the way, you will notice that this does no > reversal. Why would it? Yes, the intermediate > lists L and G are delivered in reverse order, but > we're going to *sort* them, so we don't *care* what > order they are in. We would care if we were trying > to make a stable sorting routine, but quicksort is > not a stable sorting routine anyway. Reversing a > list whose order you don't care about is pointless. > > (B) Even with a "fat pivot" quicksort can still go > quadratic. Taking the first element of the list > as the pivot means it WILL go quadratic if the > input is already sorted or nearly sorted, which turns > out to be quite common in practice. This is why > array-based quicksorts use median-of-three pivot > selection and the Bentley & McIlroy version uses > the median of three medians of three for large enough > inputs. > > Many people are impressed by the name "quick"sort and > don't look into the background. When Hoare invented quick > sort, he (a) already worked out most of the improvements > people talk about and (b) KNEW that on a good day with the > wind behind it quicksort would do more comparisons than > merge sort. BUT he needed to sort a bunch of numbers on > a machine with an amount of memory that would disgrace a > modern singing greeting card and just didn't have enough > memory for workspace. If comparisons are costly, it's > easy for a well-written merge sort to beat quicksort. > > For what it's worth, I have tested the code above but not > benchmarked it. > > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From Catenacci@REDACTED Wed Dec 30 15:45:56 2015 From: Catenacci@REDACTED (Onorio Catenacci) Date: Wed, 30 Dec 2015 09:45:56 -0500 Subject: [erlang-questions] Relx And Rebar3 Message-ID: Hi guys, Dumb question I guess but how are Rebar3 and Relx related? I mean it looks as if Rebar3 is a superset of Relx. Am I understanding that correctly? I'm asking because I've been working on getting the Elixir Exrm batch files working better on Windows. Some of the stuff I've found seems that it may be applicable to Relx as well. Since it looks as if Rebar3 will call Relx should I fork Relx and submit my pull requests there? -- Onorio -------------- next part -------------- An HTML attachment was scrubbed... URL: