From ostinelli@REDACTED Sun Dec 1 14:21:38 2019
From: ostinelli@REDACTED (Roberto Ostinelli)
Date: Sun, 1 Dec 2019 14:21:38 +0100
Subject: gen_server locked for some time
In-Reply-To: References: Message-ID:

Thank you all for the suggestions, will investigate options and profile!

Best,
r.

On Sat, Nov 30, 2019 at 11:50 AM Mikael Pettersson wrote:
> On Fri, Nov 29, 2019 at 11:47 PM Roberto Ostinelli wrote:
> >
> > All,
> > I have a gen_server that at periodic intervals becomes busy, sometimes for
> > over 10 seconds, while writing bulk incoming data. This gen_server also
> > receives smaller individual data updates.
> >
> > I could offload the bulk writing routine to separate processes but the
> > smaller individual data updates would then be processed before the bulk
> > processing is over, hence generating an incorrect scenario where smaller,
> > more recent data gets overwritten by the bulk processing.
> >
> > I'm trying to see how to solve the fact that all the gen_server calls
> > during the bulk update would time out.
>
> If there is more logic in the gen_server for incoming data, have it
> offload all writes to the separate process, using its message queue as
> a buffer. Otherwise make the sends to the gen_server asynchronous.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From max.lapshin@REDACTED Mon Dec 2 14:52:44 2019
From: max.lapshin@REDACTED (Max Lapshin)
Date: Mon, 2 Dec 2019 16:52:44 +0300
Subject: gen_server locked for some time
In-Reply-To: References: Message-ID:

In Flussonic we use the concept of fast and slow servers.

A fast gen_server must never get blocked, so it is always safe to gen_server:call it. A slow one can get blocked on disk I/O or some other job.

A good pattern is for the slow server to ask the fast one for work. The fast server can call the slow one, but only very carefully: check first whether it is safe to make the call.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From jesper.louis.andersen@REDACTED Mon Dec 2 15:56:53 2019
From: jesper.louis.andersen@REDACTED (Jesper Louis Andersen)
Date: Mon, 2 Dec 2019 15:56:53 +0100
Subject: gen_server locked for some time
In-Reply-To: References: Message-ID:

Another path is to make the bulk write cooperative within the process. Write in small chunks and go back into the gen_server loop in between those chunks being written. You now have progress, but no separate process.

Another useful variant is to have two processes, but with the split skewed. You prepare iodata() in the main process, and then send that to the other process as a message. This message will be fairly small since large binaries will be transferred by reference. The queue in the other process acts as a linearizing write buffer, so ordering doesn't get messed up. You have now moved the bulk write call out of the main process, so it is free to do other processing in between. You might even want a protocol between the two processes to exert some kind of flow control on the system. However, you don't have an even balance between the processes. One is the intelligent orchestrator. The other is the worker, taking the block on the bulk operation.

Another thing is to improve the observability of the system. Start doing measurements on the lag time of the gen_server and plot this in a histogram. Measure the amount of data written in the bulk message. This gives you some real data to work with. The thing is: if you experience blocking in some part of your system, it is likely there is some kind of traffic/request pattern which triggers it. Understand that pattern.
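To make the skewed split concrete, here is a minimal sketch. The module and function names are invented for illustration, and the disk write is represented by file:write/2 on an already opened device; treat it as a shape, not a finished implementation:

-module(bulk_writer).
-export([start_link/1, write/2]).

%% The writer's mailbox is the linearizing buffer: chunks are written
%% strictly in the order they were sent.
start_link(IoDevice) ->
    {ok, spawn_link(fun() -> loop(IoDevice) end)}.

%% Called from the orchestrating gen_server; returns immediately. Large
%% binaries inside Chunk travel by reference, so the send is cheap.
write(Writer, Chunk) ->
    Writer ! {write, Chunk},
    ok.

loop(IoDevice) ->
    receive
        {write, Chunk} ->
            ok = file:write(IoDevice, Chunk),
            loop(IoDevice)
    end.

A flow-control refinement would be to have loop/1 send an acknowledgement back every N chunks, so the orchestrator can slow down when the writer falls behind.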
Such a traffic pattern is often covering for some important behavior among users you didn't think about. Anticipating future uses of the system allows you to be proactive about latency problems.

It is sometimes better to gate the problem by limiting what a user/caller/request is allowed to do. As an example, you can reject large requests to the system and demand they happen cooperatively between a client and a server. This slows down the client because they have to wait for a server response before they can issue the next request. If the internet is in between, you just injected an artificial RTT + server processing in between calls, implicitly slowing the client down.

On Fri, Nov 29, 2019 at 11:47 PM Roberto Ostinelli wrote:
> All,
> I have a gen_server that at periodic intervals becomes busy, sometimes for
> over 10 seconds, while writing bulk incoming data. This gen_server also
> receives smaller individual data updates.
>
> I could offload the bulk writing routine to separate processes but the
> smaller individual data updates would then be processed before the bulk
> processing is over, hence generating an incorrect scenario where smaller,
> more recent data gets overwritten by the bulk processing.
>
> I'm trying to see how to solve the fact that all the gen_server calls
> during the bulk update would time out.
>
> Any ideas of best practices?
>
> Thank you,
> r.

--
J.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From hello@REDACTED Tue Dec 3 11:56:28 2019
From: hello@REDACTED (Adam Lindberg)
Date: Tue, 3 Dec 2019 11:56:28 +0100
Subject: ** exception exit: noconnection
Message-ID:

Hi!

I'm running some tests using distributed Erlang. I set up a cluster of Erlang nodes doing "Distributed Systems" stuff, and a hidden node that has a connection to each of the nodes in that cluster. The hidden node orchestrates the test by starting all Erlang nodes as ports. It then starts a process (gen_server) on each node that manipulates stuff on that node. It also loads some mock modules, among other things. The hidden node also has some managing gen_servers running locally, which some of the mocks make RPC calls to from the cluster nodes (to simulate and orchestrate mocked hardware components).

Now I wanted to test how my system behaves when killing some random nodes, chaos monkey style. So I picked the easiest option of using rpc:cast(RandomClusterNode, erlang, halt, [137]). However, now my test dies with the following obscure error: ** exception exit: noconnection. This even happens when first spawning a fun that then calls erlang:halt(137) (so as to avoid the RPC connection somehow breaking).

After searching a bit on the Internet it seems to be some internal uncatchable (!) error generated by Erlang [1][2], but it is not at all clear when it happens, and how to avoid it. After some debugging in the gen_servers running on the hidden node, I can see the error by setting process_flag(trap_exit, true) and printing it in terminate/2, but I still can't catch it. I can't even catch it in the shell by enclosing my run in a try-catch block! It's almost not mentioned at all in the official documentation [3]. Most likely I'm setting up my test nodes and the application/test code in a way that generates this error, but I have no idea what exactly leads to it.

I guess I have two problems:

1. What is the error, and how can I handle / avoid it?
2. Why is it not documented?
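For reference, here is a minimal sketch of the shape of the problem. The node name 'n1@example' is invented and must already be connected; with trap_exit set, the broken link arrives as an ordinary message instead of killing the caller:

observe_noconnection() ->
    spawn(fun() ->
        process_flag(trap_exit, true),
        %% Link to something on the remote node...
        _Remote = spawn_link('n1@example', fun() -> timer:sleep(infinity) end),
        timer:sleep(100), % crude settling delay; this is only a sketch
        %% ...then take the node down, chaos monkey style.
        rpc:cast('n1@example', erlang, halt, [137]),
        receive
            {'EXIT', Pid, Reason} ->
                %% Reason is noconnection when the node goes away.
                io:format("link to ~p broke: ~p~n", [Pid, Reason])
        end
    end).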
Cheers,
Adam

[1]: http://erlang.org/pipermail/erlang-questions/2012-April/066219.html
[2]: http://erlang.org/pipermail/erlang-questions/2013-April/073246.html
[3]: http://erlang.org/doc/getting_started/robustness.html

From ostinelli@REDACTED Tue Dec 3 18:48:31 2019
From: ostinelli@REDACTED (Roberto Ostinelli)
Date: Tue, 3 Dec 2019 18:48:31 +0100
Subject: exception exit: timeout in gen_server:call
Message-ID:

All,
I am experiencing the following error when calling a transaction in poolboy as per the README:

equery(PoolName, Stmt, Params) ->
    poolboy:transaction(PoolName, fun(Worker) ->
        gen_server:call(Worker, {equery, Stmt, Params})
    end).

** exception exit: {timeout,{gen_server,call,
                        [keys_db,{checkout,#Ref<0.0.1.156295>,true},5000]}}
     in function  gen_server:call/3 (gen_server.erl, line 212)
     in call from poolboy:checkout/3 (/home/ubuntu/workspace/myapp/_build/default/lib/poolboy/src/poolboy.erl, line 55)
     in call from poolboy:transaction/3 (/home/ubuntu/workspace/myapp/_build/default/lib/poolboy/src/poolboy.erl, line 74)

The process queue keeps on increasing, and I can see the following:

3> erlang:process_info(whereis(keys_db)).
[{registered_name,keys_db},
 {current_function,{gen,do_call,4}},
 {initial_call,{proc_lib,init_p,5}},
 {status,waiting},
 {message_queue_len,11906},
 {messages,[{'$gen_cast',{cancel_waiting,#Ref<0.0.1.138090>}},
            {'$gen_call',{<0.15224.0>,#Ref<0.0.1.139621>},
                         {checkout,#Ref<0.0.1.139620>,true}},
            {'$gen_call',{<0.15139.0>,#Ref<0.0.1.139649>},
                         {checkout,#Ref<0.0.1.139648>,true}},
            {'$gen_cast',{cancel_waiting,#Ref<0.0.1.138159>}},
            {'$gen_cast',{cancel_waiting,#Ref<0.0.1.138175>}},
            {'$gen_cast',{cancel_waiting,#Ref<0.0.1.138232>}},
            {'$gen_cast',{cancel_waiting,#Ref<0.0.1.138252>}},
            {'$gen_cast',{cancel_waiting,#Ref<0.0.1.138261>}},
            {'$gen_cast',{cancel_waiting,#Ref<0.0.1.138286>}},
            {'$gen_call',{<0.15235.0>,#Ref<0.0.1.139774>},
                         {checkout,#Ref<0.0.1.139773>,true}},
            {'$gen_cast',{cancel_waiting,#Ref<0.0.2.77777>}},
            {'$gen_cast',{cancel_waiting,#Ref<0.0.1.138318>}},
            {'$gen_cast',{cancel_waiting,#Ref<0.0.1.138336>}},
            {'$gen_call',{<0.15233.0>,#Ref<0.0.1.139816>},
                         {checkout,#Ref<0.0.1.139815>,true}},
            {'$gen_call',{<0.15245.0>,#Ref<0.0.1.139854>},
                         {checkout,#Ref<0.0.1.139853>,true}},
            {'$gen_call',{<0.15237.0>,#Ref<0.0.2.78173>},
                         {checkout,#Ref<0.0.2.78172>,...}},
            {'$gen_cast',{cancel_waiting,#Ref<0.0.1.138407>}},
            {'$gen_call',{<0.15228.0>,...},{...}},
            {'$gen_call',{...},...},
            {'$gen_call',...},
            {...}|...]},
 {links,[<0.714.1>,<0.817.1>,<0.947.1>,<0.1015.1>,<0.1045.1>,
         <0.1048.1>,<0.1038.1>,<0.983.1>,<0.1002.1>,<0.962.1>,
         <0.877.1>,<0.909.1>,<0.938.1>,<0.892.1>,<0.849.1>,<0.866.1>,
         <0.832.1>,<0.765.1>,<0.789.1>,<0.804.1>|...]},
 {dictionary,[{'$initial_call',{poolboy,init,1}},
              {'$ancestors',[pgpool_sup,<0.673.0>]}]},
 {trap_exit,true},
 {error_handler,error_handler},
 {priority,normal},
 {group_leader,<0.672.0>},
 {total_heap_size,393326},
 {heap_size,196650},
 {stack_size,33},
 {reductions,14837255},
 {garbage_collection,[{max_heap_size,#{error_logger => true,
                                       kill => true,
                                       size => 0}},
                      {min_bin_vheap_size,46422},
                      {min_heap_size,233},
                      {fullsweep_after,10},
                      {minor_gcs,3}]},
 {suspending,[]}]

Does someone have any insight into what may be going wrong? I see that the process status is waiting...

Thank you,
r.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From ostinelli@REDACTED Tue Dec 3 19:14:10 2019
From: ostinelli@REDACTED (Roberto Ostinelli)
Date: Tue, 3 Dec 2019 19:14:10 +0100
Subject: exception exit: timeout in gen_server:call
In-Reply-To: References: Message-ID:

Forgot to mention: this happens in a completely random way.

On Tue, Dec 3, 2019 at 6:48 PM Roberto Ostinelli wrote:
> All,
> I am experiencing the following error when calling a transaction in
> poolboy as per the README:
> [... rest of the message quoted verbatim -- the equery/3 wrapper, the
> timeout exception, and the full process_info dump -- snipped as a
> duplicate of the message above ...]
> Thank you,
> r.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From jesper.louis.andersen@REDACTED Tue Dec 3 19:25:46 2019
From: jesper.louis.andersen@REDACTED (Jesper Louis Andersen)
Date: Tue, 3 Dec 2019 19:25:46 +0100
Subject: exception exit: timeout in gen_server:call
In-Reply-To: References: Message-ID:

The keys_db process is currently executing gen:do_call/4.

It is blocked on some other process somewhere, waiting for it to respond, so it can respond back to the 11906 requests it has :) Chances are that the next request in the queue will trigger another gen:do_call/4, as will the 11905 behind it. And in the meantime, 30 new requests arrived, so we are now at 11935.

In short, it is likely something is running too slowly to handle the current load on the system and, furthermore, there is no flow control in the system to make the callers reduce the load.

Since it happens randomly, it is likely there is a single large request, done by a single user now and then. And this request stalls the system. Probably enough to have the queue grow and errors start happening. Put a limit on what can be requested, and force the client to cooperate by having it call in with a "plz continue" token.

You can also ask the system for a more detailed stack trace, see erlang:process_info/2 and current_stacktrace when it goes bad. This can often tell you which gen_server call is being made and to whom, narrowing down the problem.

On Tue, Dec 3, 2019 at 7:14 PM Roberto Ostinelli wrote:
> Forgot to mention: this happens in a completely random way.
>
> On Tue, Dec 3, 2019 at 6:48 PM Roberto Ostinelli wrote:
>> All,
>> I am experiencing the following error when calling a transaction in
>> poolboy as per the README:
>> [... equery wrapper, timeout exception, and full process_info dump
>> quoted in the original; snipped here as a verbatim duplicate of the
>> message above ...]
>> Does someone have any insight into what may be going wrong? I see that the
>> process status is waiting...
>>
>> Thank you,
>> r.

--
J.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From ostinelli@REDACTED Tue Dec 3 19:37:27 2019
From: ostinelli@REDACTED (Roberto Ostinelli)
Date: Tue, 3 Dec 2019 19:37:27 +0100
Subject: exception exit: timeout in gen_server:call
In-Reply-To: References: Message-ID:

This function is nothing but a single Postgres query, a "select * from users where id = '123'", properly indexed.

The only thing I can see is latency towards the db (cross-region AWS, unfortunately). It really is that at a certain moment, randomly (sometimes after 5 minutes, other times after 2 days), this happens and there's no recovery whatsoever.

I could start rejecting HTTP calls if poolboy can't perform a checkout; not sure how to do that (need to check the library better).

>> On 3 Dec 2019, at 19:26, Jesper Louis Andersen wrote:
>
> The keys_db process is currently executing gen:do_call/4.
>
> It is blocked on some other process somewhere, waiting for it to respond, so it can respond back to the 11906 requests it has :) Chances are that the next request in the queue will trigger another gen:do_call/4, as will the 11905 behind it. And in the meantime, 30 new requests arrived, so we are now at 11935.
> In short, it is likely something is running too slowly to handle the current load on the system and, furthermore, there is no flow control in the system to make the callers reduce the load.
>
> Since it happens randomly, it is likely there is a single large request, done by a single user now and then. And this request stalls the system. Probably enough to have the queue grow and errors start happening. Put a limit on what can be requested, and force the client to cooperate by having it call in with a "plz continue" token.
>
> You can also ask the system for a more detailed stack trace, see erlang:process_info/2 and current_stacktrace when it goes bad. This can often tell you which gen_server call is being made and to whom, narrowing down the problem.
>
>> On Tue, Dec 3, 2019 at 7:14 PM Roberto Ostinelli wrote:
>> Forgot to mention: this happens in a completely random way.
>>
>>> On Tue, Dec 3, 2019 at 6:48 PM Roberto Ostinelli wrote:
>>> All,
>>> I am experiencing the following error when calling a transaction in poolboy as per the README:
>>> [... equery wrapper, timeout exception, and full process_info dump
>>> quoted in the original; snipped here as a verbatim duplicate of the
>>> first message in this thread ...]
>>> Does someone have any insight into what may be going wrong? I see that the
>>> process status is waiting...
>>>
>>> Thank you,
>>> r.

--
J.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From jesper.louis.andersen@REDACTED Tue Dec 3 20:31:41 2019
From: jesper.louis.andersen@REDACTED (Jesper Louis Andersen)
Date: Tue, 3 Dec 2019 20:31:41 +0100
Subject: exception exit: timeout in gen_server:call
In-Reply-To: References: Message-ID:

On Tue, Dec 3, 2019 at 7:37 PM Roberto Ostinelli wrote:
> This function is nothing but a single Postgres query, a "select * from
> users where id = '123'", properly indexed.
>

If it transfers 2 gigabytes of data, then this single Postgres query is going to take some time. If someone is doing updates which require a full table lock on the users table, this query is going to take some time.

> The only thing I can see is latency towards the db (cross-region AWS,
> unfortunately). It really is that at a certain moment, randomly (sometimes
> after 5 minutes, other times after 2 days), this happens and there's no
> recovery whatsoever.
>

Other tricks:

* If your initial intuitive drill down into the system bears no fruit, start caring about facts.
* Measure the maximal latency over the equery call you've made for a 10-15 second period.
Plot it. > * We are interested in microstutters in the pacing. If they are present, > it is likely there is some problem which then suddenly tips the system > over. If not, then it is more likely that it is something we don't know. > * The database might be fast, but there is still latency to the first > byte, and there is the transfer time to the last byte. If a query is 50ms, > say, then you are only going to run 20 of those per connection. > * Pipeline the queries. A query which waits for an answer affects every > sibling query as well. > > Down the line: > > * Postgres can log slow queries. Turn that on. > * Postgres can log whenever it holds a lock for more than a certain time > window. Turn that on. > > Narrow down where the problem can occur by having systems provide facts to > you. Don't go for "what is wrong?" Go for "What would I like to know?". > This helps observability (In the Control Theory / Charity Majors sense). > I did most of those. There are no slow queries, the database is literally sleeping. The issue is with the latency and the variance on the latency responses. You nailed it at first: it's a matter of flow control. Simply put, my HTTP server is faster than the queries to the database (due to the extra-latency), even though prepared statements are used. This never occurs on other installations where databases are local, so I simply underestimated this aspect. As per my previous statement, I'm going to add a flow restriction by which if there are no available database workers in the pool, I'll reject the HTTP call (I guess with a 503 or similar). Thank you for your help, r. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ostinelli@REDACTED Tue Dec 3 20:59:08 2019 From: ostinelli@REDACTED (Roberto Ostinelli) Date: Tue, 3 Dec 2019 20:59:08 +0100 Subject: gen_server locked for some time In-Reply-To: References: Message-ID: Thanks for the tips, Max and Jesper. In those solutions though how do you guarantee the order of the call? My main issue is to avoid that the slow process does not override more recent but faster data chunks. Do you pile them up in a queue in the received order and treat them after that? On Mon, Dec 2, 2019 at 3:57 PM Jesper Louis Andersen < jesper.louis.andersen@REDACTED> wrote: > Another path is to cooperate the bulk write in the process. Write in small > chunks and go back into the gen_server loop in between those chunks being > written. You now have progress, but no separate process. > > Another useful variant is to have two processes, but having the split > skewed. You prepare iodata() in the main process, and then send that to the > other process as a message. This message will be fairly small since large > binaries will be transferred by reference. The queue in the other process > acts as a linearizing write buffer so ordering doesn't get messed up. You > have now moved the bulk write call out of the main process, so it is free > to do other processing in between. You might even want a protocol between > the two processes to exert some kind of flow control on the system. > However, you don't have an even balance between the processes. One is the > intelligent orchestrator. The other is the worker, taking the block on the > bulk operation. > > Another thing is to improve the observability of the system. Start doing > measurements on the lag time of the gen_server and plot this in a > histogram. Measure the amount of data written in the bulk message. This > gives you some real data to work with. 
The thing is: if you experience > blocking in some part of your system, it is likely there is some kind of > traffic/request pattern which triggers it. Understand that pattern. It is > often covering for some important behavior among users you didn't think > about. Anticipation of future uses of the system allows you to be proactive > about latency problems. > > It is some times better to gate the problem by limiting what a > user/caller/request is allowed to do. As an example, you can reject large > requests to the system and demand they happen cooperatively between a > client and a server. This slows down the client because they have to wait > for a server response until they can issue the next request. If the > internet is in between, you just injected an artificial RTT + server > processing in between calls, implicitly slowing the client down. > > > On Fri, Nov 29, 2019 at 11:47 PM Roberto Ostinelli > wrote: > >> All, >> I have a gen_server that in periodic intervals becomes busy, eventually >> over 10 seconds, while writing bulk incoming data. This gen_server also >> receives smaller individual data updates. >> >> I could offload the bulk writing routine to separate processes but the >> smaller individual data updates would then be processed before the bulk >> processing is over, hence generating an incorrect scenario where smaller >> more recent data gets overwritten by the bulk processing. >> >> I'm trying to see how to solve the fact that all the gen_server calls >> during the bulk update would timeout. >> >> Any ideas of best practices? >> >> Thank you, >> r. >> > > > -- > J. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lukas@REDACTED Wed Dec 4 08:38:52 2019 From: lukas@REDACTED (Lukas Larsson) Date: Wed, 4 Dec 2019 08:38:52 +0100 Subject: ** exception exit: noconnection In-Reply-To: References: Message-ID: On Tue, Dec 3, 2019 at 11:56 AM Adam Lindberg wrote: > Hi! > > I?m running some tests using distributed Erlang. I set up a cluster of > Erlang nodes doing Distributed Systems? stuff, and a hidden node that have > a connection to each of the nodes in that cluster. The hidden node > orchestrates the test by starting all Erlang nodes as ports. It then starts > a process (gen_server) on each node that manipulates stuff on that node. It > also loads some mock modules among other things. The hidden node also has > some managing gen_servers running locally, which some of the mocks makes > RPC calls to from the cluster nodes (to simulate and orchestrate mocked > hardware components). > > Now I wanted to test how my system behaves when killing some random nodes, > chaos monkey style. So I picked the easiest option of using > rpc:cast(RandomClusterNode, erlang, halt, [137]). However, now my test dies > with the following obscure error: ** exception exit: noconnection. This > even happens when first spawning a fun that then calls erlang:halt(137) (as > to avoid the RPC connection somehow breaking). > > After searching a bit on the Internet it seems to be some internal > uncatchable (!) error generated by Erlang [1][2], but it is not at all > clear when it happens, and how to avoid it. After some debugging in the > gen_servers running on the hidden node, I can see the error by setting > process_flag(trap_exit, true) and printing it in terminate/2 but I still > can?t catch it. I can?t even catch it in the shell by enclosing my run in a > try-catch block! It?s almost not mentioned at all in the official > documentation [3]. 
Most likely I'm setting up my test nodes and the application/test code in a way that generates this error, but I have no idea what exactly leads to it.
>
> I guess I have two problems:
>
> 1. What is the error, and how can I handle / avoid it?

I'm not sure, but could it be that your process is linked to a process on the remote side, and that what you are getting is a broken link error?

> 2. Why is it not documented?
>
> Cheers,
> Adam
>
> [1]: http://erlang.org/pipermail/erlang-questions/2012-April/066219.html
> [2]: http://erlang.org/pipermail/erlang-questions/2013-April/073246.html
> [3]: http://erlang.org/doc/getting_started/robustness.html
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From roger@REDACTED Wed Dec 4 09:33:21 2019
From: roger@REDACTED (Roger Lipscombe)
Date: Wed, 4 Dec 2019 08:33:21 +0000
Subject: exception exit: timeout in gen_server:call
In-Reply-To: References: Message-ID:

On Tue, 3 Dec 2019 at 19:32, Jesper Louis Andersen wrote:
> * Measure the maximal latency over the equery call you've made for a 10-15 second period. Plot it.

We wrap our database calls (and pool checkouts) in folsom_metrics:histogram_timed_update, which reports percentiles to graphite. Since folsom is (afaict) abandoned, you could see if (e.g.) exometer has something similar. We also report connections-in-use, etc., which is helpful.

> * Pipeline the queries. A query which waits for an answer affects every sibling query as well.

Other pooling models are available. Pull, rather than push, might help -- because it could pull N pending requests and pipeline them. Can't find the relevant links right now, though.

> Down the line:
>
> * Postgres can log slow queries. Turn that on.
> * Postgres can log whenever it holds a lock for more than a certain time window. Turn that on.
>
> Narrow down where the problem can occur by having systems provide facts to you. Don't go for "what is wrong?" Go for "What would I like to know?". This helps observability (In the Control Theory / Charity Majors sense).

From hello@REDACTED Wed Dec 4 09:54:39 2019
From: hello@REDACTED (Adam Lindberg)
Date: Wed, 4 Dec 2019 09:54:39 +0100
Subject: ** exception exit: noconnection
In-Reply-To: References: Message-ID:

I have indeed linked processes. I realized that this is why the exception is "uncatchable" in the shell, perhaps: the shell process dies because it is linked to my test processes, and the function running the test hasn't encountered an error yet.

Do links in Erlang always crash with {'EXIT', Pid, noconnection} when a node dies?

Cheers,
Adam

> On 4. Dec 2019, at 08:38, Lukas Larsson wrote:
>
> On Tue, Dec 3, 2019 at 11:56 AM Adam Lindberg wrote:
> Hi!
>
> I'm running some tests using distributed Erlang. I set up a cluster of Erlang nodes doing "Distributed Systems" stuff, and a hidden node that has a connection to each of the nodes in that cluster. The hidden node orchestrates the test by starting all Erlang nodes as ports. It then starts a process (gen_server) on each node that manipulates stuff on that node. It also loads some mock modules, among other things. The hidden node also has some managing gen_servers running locally, which some of the mocks make RPC calls to from the cluster nodes (to simulate and orchestrate mocked hardware components).
>
> Now I wanted to test how my system behaves when killing some random nodes, chaos monkey style. So I picked the easiest option of using rpc:cast(RandomClusterNode, erlang, halt, [137]).
> However, now my test dies with the following obscure error: ** exception exit: noconnection. This even happens when first spawning a fun that then calls erlang:halt(137) (so as to avoid the RPC connection somehow breaking).
>
> After searching a bit on the Internet it seems to be some internal uncatchable (!) error generated by Erlang [1][2], but it is not at all clear when it happens, and how to avoid it. After some debugging in the gen_servers running on the hidden node, I can see the error by setting process_flag(trap_exit, true) and printing it in terminate/2, but I still can't catch it. I can't even catch it in the shell by enclosing my run in a try-catch block! It's almost not mentioned at all in the official documentation [3]. Most likely I'm setting up my test nodes and the application/test code in a way that generates this error, but I have no idea what exactly leads to it.
>
> I guess I have two problems:
>
> 1. What is the error, and how can I handle / avoid it?
>
> I'm not sure, but could it be that your process is linked to a process on the remote side, and that what you are getting is a broken link error?
>
> 2. Why is it not documented?
>
> Cheers,
> Adam
>
> [1]: http://erlang.org/pipermail/erlang-questions/2012-April/066219.html
> [2]: http://erlang.org/pipermail/erlang-questions/2013-April/073246.html
> [3]: http://erlang.org/doc/getting_started/robustness.html

From lukas@REDACTED Wed Dec 4 10:01:28 2019
From: lukas@REDACTED (Lukas Larsson)
Date: Wed, 4 Dec 2019 10:01:28 +0100
Subject: ** exception exit: noconnection
In-Reply-To: References: Message-ID:

On Wed, Dec 4, 2019 at 9:54 AM Adam Lindberg wrote:
> Do links in Erlang always crash with {'EXIT', Pid, noconnection} when a
> node dies?

Yes, they should. It is also the reason given in monitor messages.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From hello@REDACTED Wed Dec 4 10:58:59 2019
From: hello@REDACTED (Adam Lindberg)
Date: Wed, 4 Dec 2019 10:58:59 +0100
Subject: ** exception exit: noconnection
In-Reply-To: References: Message-ID: <266CA31E-8344-415C-B3D2-D684B844F184@alind.io>

Ah, thanks! That's good to know. :-)

Maybe I missed it but I can't find this documented anywhere (in e.g. erlang:link/1 or erlang:monitor/2). The only place I can find it referenced is in an example in the Getting Started User's Guide: http://erlang.org/doc/getting_started/robustness.html

Perhaps it should be documented more prominently?

The next question I need to clarify is: can gen_server processes never receive exit messages as normal info messages? If I enable trap_exit I only receive a call to terminate with the noconnection error eventually...

Cheers,
Adam

> On 4. Dec 2019, at 09:54, Adam Lindberg wrote:
>
> I have indeed linked processes. I realized that this is why the exception is "uncatchable" in the shell, perhaps: the shell process dies because it is linked to my test processes, and the function running the test hasn't encountered an error yet.
>
> Do links in Erlang always crash with {'EXIT', Pid, noconnection} when a node dies?
>
> Cheers,
> Adam
>
>> On 4. Dec 2019, at 08:38, Lukas Larsson wrote:
>> [... earlier messages in this thread quoted in full; snipped as verbatim duplicates ...]

From lukas@REDACTED Wed Dec 4 11:31:38 2019
From: lukas@REDACTED (Lukas Larsson)
Date: Wed, 4 Dec 2019 11:31:38 +0100
Subject: ** exception exit: noconnection
In-Reply-To: <266CA31E-8344-415C-B3D2-D684B844F184@alind.io> References: <266CA31E-8344-415C-B3D2-D684B844F184@alind.io> Message-ID:

On Wed, Dec 4, 2019 at 10:59 AM Adam Lindberg wrote:
> Ah, thanks! That's good to know. :-)
>
> Maybe I missed it but I can't find this documented anywhere (in e.g.
> erlang:link/1 or erlang:monitor/2). The only place I can find it referenced
> is in an example in the Getting Started User's Guide:
> http://erlang.org/doc/getting_started/robustness.html

It is mentioned under the Info section in the erlang:monitor/2 documentation.

> Perhaps it should be documented more prominently?

Yes it should, just as noproc is.

> The next question I need to clarify is: can gen_server processes never
> receive exit messages as normal info messages? If I enable trap_exit I only
> receive a call to terminate with the noconnection error eventually...

I'm not sure I understand what you mean.

When trapping exits, a gen_server will either get the exit in the terminate or handle_info callback. Which one depends on which process sends the exit signal. If it is the "parent" process, i.e. the process that started the gen_server, then the terminate callback will be called. If it is some other process, the handle_info callback is called.
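To make the distinction concrete, a minimal sketch (the module name is invented): with trap_exit set in init/1, a broken link to a peer -- for example with Reason = noconnection when the peer's node dies -- arrives in handle_info/2 as a normal message.

-module(peer_watcher).
-behaviour(gen_server).
-export([init/1, handle_call/3, handle_cast/2, handle_info/2]).

init(PeerPid) ->
    process_flag(trap_exit, true),
    link(PeerPid),
    {ok, PeerPid}.

handle_call(_Request, _From, State) -> {reply, ok, State}.
handle_cast(_Msg, State) -> {noreply, State}.

%% A peer (not the parent) exiting lands here instead of killing us.
handle_info({'EXIT', Pid, Reason}, State) ->
    logger:warning("peer ~p exited: ~p", [Pid, Reason]),
    {noreply, State};
handle_info(_Other, State) ->
    {noreply, State}.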
The assumption here is that if the parent exits for any reason, you want to terminate your gen_server, but if a child or peer exits, then you want to handle that and possibly continue running. > > Cheers, > Adam > > > On 4. Dec 2019, at 09:54, Adam Lindberg wrote: > > > > I have indeed linked processes. I realized that that is why the > exception is ?uncatchable? in the shell perhaps. Because the shell process > dies because it is linked to my test processes, and the function running > the test hasn?t encountered an error yet. > > > > Does links in Erlang always crash with {'EXIT', Pid, noconnection} when > a node dies? > > > > Cheers, > > Adam > > > >> On 4. Dec 2019, at 08:38, Lukas Larsson wrote: > >> > >> > >> > >> On Tue, Dec 3, 2019 at 11:56 AM Adam Lindberg wrote: > >> Hi! > >> > >> I?m running some tests using distributed Erlang. I set up a cluster of > Erlang nodes doing Distributed Systems? stuff, and a hidden node that have > a connection to each of the nodes in that cluster. The hidden node > orchestrates the test by starting all Erlang nodes as ports. It then starts > a process (gen_server) on each node that manipulates stuff on that node. It > also loads some mock modules among other things. The hidden node also has > some managing gen_servers running locally, which some of the mocks makes > RPC calls to from the cluster nodes (to simulate and orchestrate mocked > hardware components). > >> > >> Now I wanted to test how my system behaves when killing some random > nodes, chaos monkey style. So I picked the easiest option of using > rpc:cast(RandomClusterNode, erlang, halt, [137]). However, now my test dies > with the following obscure error: ** exception exit: noconnection. This > even happens when first spawning a fun that then calls erlang:halt(137) (as > to avoid the RPC connection somehow breaking). > >> > >> After searching a bit on the Internet it seems to be some internal > uncatchable (!) error generated by Erlang [1][2], but it is not at all > clear when it happens, and how to avoid it. After some debugging in the > gen_servers running on the hidden node, I can see the error by setting > process_flag(trap_exit, true) and printing it in terminate/2 but I still > can?t catch it. I can?t even catch it in the shell by enclosing my run in a > try-catch block! It?s almost not mentioned at all in the official > documentation [3]. Most likely I?m setting up my test nodes and the > application/test code in a way that generates this error, but I have no > idea what exactly leads to it. > >> > >> I guess I have two problems: > >> > >> 1. What is the error, and how can I handle / avoid it? > >> > >> I'm not sure, but could it be that your process is linked to a process > on the remote side? That what you are getting is a broken link error? > >> > >> 2. Why is it not documented? > >> > >> Cheers, > >> Adam > >> > >> > >> [1]: > http://erlang.org/pipermail/erlang-questions/2012-April/066219.html > >> [2]: > http://erlang.org/pipermail/erlang-questions/2013-April/073246.html > >> [3]: http://erlang.org/doc/getting_started/robustness.html > >> > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From hello@REDACTED Wed Dec 4 12:17:39 2019 From: hello@REDACTED (Adam Lindberg) Date: Wed, 4 Dec 2019 12:17:39 +0100 Subject: ** exception exit: noconnection In-Reply-To: References: <266CA31E-8344-415C-B3D2-D684B844F184@alind.io> Message-ID: <72738C19-D1B1-44D6-B7F6-83B8E0CC6776@alind.io> On 4. 
Dec 2019, at 11:31, Lukas Larsson wrote:
>
> On Wed, Dec 4, 2019 at 10:59 AM Adam Lindberg wrote:
> Ah, thanks! That's good to know. :-)
>
> Maybe I missed it but I can't find this documented anywhere (in e.g. erlang:link/1 or erlang:monitor/2). The only place I can find it referenced is in an example in the Getting Started User's Guide: http://erlang.org/doc/getting_started/robustness.html
>
> It is mentioned under the Info section in the erlang:monitor/2 documentation.

Interesting, didn't show up very early in my search results. Thanks for the pointers.

> Perhaps it should be documented more prominently?
>
> Yes it should, just as noproc is.

That would be great. I'll prepare a PR.

> The next question I need to clarify is: can gen_server processes never receive exit messages as normal info messages? If I enable trap_exit I only receive a call to terminate with the noconnection error eventually...
>
> I'm not sure I understand what you mean.
>
> When trapping exits, a gen_server will either get the exit in the terminate or handle_info callback. Which one depends on which process sends the exit signal. If it is the "parent" process, i.e. the process that started the gen_server, then the terminate callback will be called. If it is some other process, the handle_info callback is called. The assumption here is that if the parent exits for any reason, you want to terminate your gen_server, but if a child or peer exits, then you want to handle that and possibly continue running.

Yeah, that it is coming from the parent is most likely my case. I think I painted myself into a very obscure corner here. I start some linked, unsupervised gen_server processes from a shell function, then run the tests with the help of those. Once the test process on the system under test dies with 'noconnection', it arrives at the shell process, which is the parent of the test processes.

One thing that I think could be improved is the error printout in the shell:

(test@REDACTED)1> my_test:start().
Running...
** exception exit: noconnection
(test@REDACTED)2>

Contrast with:

(test@REDACTED)3> exit(foo).
** exception exit: foo

In the first case, it is actually not the function that raises the exception, but the shell process that receives an exit signal. It would be nice if there was a visual difference here. The intuitive thing to do is to run "catch my_test:start()", which obviously does nothing since it is not the function that crashes; it is a linked process started by the function that sends an exit signal to the running shell process. Perhaps something along the lines of:

(test@REDACTED)1> my_test:start().
Running...
** shell process received exit signal: noconnection
(test@REDACTED)2>

Cheers,
Adam

> Cheers,
> Adam
>
> On 4. Dec 2019, at 09:54, Adam Lindberg wrote:
>
> I have indeed linked processes. I realized that this is why the exception is "uncatchable" in the shell, perhaps: the shell process dies because it is linked to my test processes, and the function running the test hasn't encountered an error yet.
>
> Do links in Erlang always crash with {'EXIT', Pid, noconnection} when a node dies?
>
> Cheers,
> Adam
>
>> On 4. Dec 2019, at 08:38, Lukas Larsson wrote:
>>
>> On Tue, Dec 3, 2019 at 11:56 AM Adam Lindberg wrote:
>> Hi!
>>
>> I'm running some tests using distributed Erlang. I set up a cluster of Erlang nodes doing "Distributed Systems" stuff, and a hidden node that has a connection to each of the nodes in that cluster.
The hidden node orchestrates the test by starting all Erlang nodes as ports. It then starts a process (gen_server) on each node that manipulates stuff on that node. It also loads some mock modules among other things. The hidden node also has some managing gen_servers running locally, which some of the mocks makes RPC calls to from the cluster nodes (to simulate and orchestrate mocked hardware components). > >> > >> Now I wanted to test how my system behaves when killing some random nodes, chaos monkey style. So I picked the easiest option of using rpc:cast(RandomClusterNode, erlang, halt, [137]). However, now my test dies with the following obscure error: ** exception exit: noconnection. This even happens when first spawning a fun that then calls erlang:halt(137) (as to avoid the RPC connection somehow breaking). > >> > >> After searching a bit on the Internet it seems to be some internal uncatchable (!) error generated by Erlang [1][2], but it is not at all clear when it happens, and how to avoid it. After some debugging in the gen_servers running on the hidden node, I can see the error by setting process_flag(trap_exit, true) and printing it in terminate/2 but I still can?t catch it. I can?t even catch it in the shell by enclosing my run in a try-catch block! It?s almost not mentioned at all in the official documentation [3]. Most likely I?m setting up my test nodes and the application/test code in a way that generates this error, but I have no idea what exactly leads to it. > >> > >> I guess I have two problems: > >> > >> 1. What is the error, and how can I handle / avoid it? > >> > >> I'm not sure, but could it be that your process is linked to a process on the remote side? That what you are getting is a broken link error? > >> > >> 2. Why is it not documented? > >> > >> Cheers, > >> Adam > >> > >> > >> [1]: http://erlang.org/pipermail/erlang-questions/2012-April/066219.html > >> [2]: http://erlang.org/pipermail/erlang-questions/2013-April/073246.html > >> [3]: http://erlang.org/doc/getting_started/robustness.html > >> > > > From roger@REDACTED Wed Dec 4 15:37:22 2019 From: roger@REDACTED (Roger Lipscombe) Date: Wed, 4 Dec 2019 14:37:22 +0000 Subject: ** exception exit: noconnection In-Reply-To: <72738C19-D1B1-44D6-B7F6-83B8E0CC6776@alind.io> References: <266CA31E-8344-415C-B3D2-D684B844F184@alind.io> <72738C19-D1B1-44D6-B7F6-83B8E0CC6776@alind.io> Message-ID: On Wed, 4 Dec 2019 at 11:17, Adam Lindberg wrote: > Perhaps something along the lines of: > > (test@REDACTED)1> my_test:start(). > Running... > ** shell process received exit signal: noconnection > (test@REDACTED)2> Except: the shell process isn't receiving an exit signal. It's being killed. You can see that if you examine self() before and after that message -- the REPL restarts the shell process. To make this more obvious, I have a custom prompt which displays the shell's pid: https://github.com/rlipscombe/rl_erl_prompt [1]. But even with that said, the message could be improved, certainly. [1]: I note in passing that there's actually attempted support for colour in there. I could never get it working. From stefano.bertuola@REDACTED Wed Dec 4 14:03:50 2019 From: stefano.bertuola@REDACTED (Stefano Bertuola) Date: Wed, 4 Dec 2019 14:03:50 +0100 Subject: Dynamic Configuration Database in Erlang Message-ID: Hi all. I am looking for understanding how to implement a configuration database in Erlang (which allows dynamic configuration). 
For example, in one of Fred Hebert's blog posts ( https://ferd.ca/erlang-otp-21-s-new-logger.html) about Logger, he mentions: "[a] configuration database is an opaque set of OTP processes that hold all the data in an ETS table so it's fast of access (something lager did as well), but that could very well be written using the OTP Persistent Term Storage [...]".

Does anyone have more details about this or any other implementation?

Br.
Stefano
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From t@REDACTED Wed Dec 4 15:54:00 2019
From: t@REDACTED (Tristan Sloughter)
Date: Wed, 04 Dec 2019 07:54:00 -0700
Subject: exception exit: timeout in gen_server:call
In-Reply-To: References: Message-ID: <7f32a576-98d5-4d65-86a8-32432c4a9892@www.fastmail.com>

For metrics and tracing, the pgo (https://github.com/erleans/pgo) library is a Postgres lib with a built-in pool that is currently instrumented with telemetry (https://github.com/beam-telemetry/telemetry) for metrics and can optionally create spans with opencensus (https://github.com/census-instrumentation/opencensus-erlang).

Hm, looks like I haven't yet added the `queue_time` to the metrics; I'll try to remember to do that soon when I replace OpenCensus with OpenTelemetry (https://github.com/open-telemetry/opentelemetry-erlang) https://opentelemetry.io/. At the same time I'll be expanding the available metrics to include information about the pool. So only the query time is sent in the telemetry event at this time.

Tristan

On Wed, Dec 4, 2019, at 01:33, Roger Lipscombe wrote:
> On Tue, 3 Dec 2019 at 19:32, Jesper Louis Andersen wrote:
> > * Measure the maximal latency over the equery call you've made for a 10-15 second period. Plot it.
>
> We wrap our database calls (and pool checkouts) in
> folsom_metrics:histogram_timed_update, which reports percentiles to
> graphite. Since folsom is (afaict) abandoned, you could see if (e.g.)
> exometer has something similar. We also report connections-in-use,
> etc., which is helpful.
>
> > * Pipeline the queries. A query which waits for an answer affects every sibling query as well.
>
> Other pooling models are available. Pull, rather than push, might help
> -- because it could pull N pending requests and pipeline them. Can't
> find the relevant links right now, though.
>
> > Down the line:
> >
> > * Postgres can log slow queries. Turn that on.
> > * Postgres can log whenever it holds a lock for more than a certain time window. Turn that on.
> >
> > Narrow down where the problem can occur by having systems provide facts to you. Don't go for "what is wrong?" Go for "What would I like to know?". This helps observability (In the Control Theory / Charity Majors sense).

From hello@REDACTED Wed Dec 4 16:06:07 2019
From: hello@REDACTED (Adam Lindberg)
Date: Wed, 4 Dec 2019 16:06:07 +0100
Subject: ** exception exit: noconnection
In-Reply-To: References: <266CA31E-8344-415C-B3D2-D684B844F184@alind.io> <72738C19-D1B1-44D6-B7F6-83B8E0CC6776@alind.io> Message-ID: <570EAF20-51FE-4EE8-AEED-64722C10EB0F@alind.io>

It does receive an exit signal, and then dies because of it, no? To further split hairs: it's not killed by anyone specifically (i.e. exit(ShellProcess, kill)), it dies just like any other Erlang process because it receives an exit signal from a linked process.

I'm pretty sure there is _some_ process _somewhere_ that also catches the error and makes sure the printout is done. I don't know if it is the current shell process or some higher level manager process though (as I didn't read the source code).

And to go ever further down the rabbit hole: there is no such thing as "killing" an Erlang process. You can only send exit signals. It's just that there is a special exit signal ("kill") that is uncatchable and where the VM makes sure the process dies.

Cheers,
Adam

> On 4. Dec 2019, at 15:37, Roger Lipscombe wrote:
>
> On Wed, 4 Dec 2019 at 11:17, Adam Lindberg wrote:
>> Perhaps something along the lines of:
>>
>> (test@REDACTED)1> my_test:start().
>> Running...
>> ** shell process received exit signal: noconnection
>> (test@REDACTED)2>
>
> Except: the shell process isn't receiving an exit signal. It's being
> killed. You can see that if you examine self() before and after that
> message -- the REPL restarts the shell process.
>
> To make this more obvious, I have a custom prompt which displays the
> shell's pid: https://github.com/rlipscombe/rl_erl_prompt [1].
>
> But even with that said, the message could be improved, certainly.
>
> [1]: I note in passing that there's actually attempted support for
> colour in there. I could never get it working.

From zxq9@REDACTED Wed Dec 4 16:08:52 2019
From: zxq9@REDACTED (zxq9)
Date: Thu, 5 Dec 2019 00:08:52 +0900
Subject: Dynamic Configuration Database in Erlang
In-Reply-To: References: Message-ID: <0d62fc71-1daf-e6e4-b3da-04407cf65e95@zxq9.com>

On 2019/12/04 22:03, Stefano Bertuola wrote:
> I am looking for understanding how to implement a configuration database
> in Erlang (which allows dynamic configuration).
>
> Does anyone have more details about this or any other implementation?

Hi, Stefano.

I make a configuration manager/DB in just about any program I write that needs configuration. Usually this takes the form of a single gen_server that is started first at the beginning of execution (you can have it start later, but usually your system configuration is pretty close to the very top of your supervision tree).

You can implement a relatively naked "global dictionary" if you haven't figured out what sort of configuration data your program needs. This would be a simple process that exposes two functions:

-spec config(Key :: atom()) -> Value :: term().
-spec config(Key :: atom(), Value :: term()) -> ok.

This is as naive as things can possibly get and is really just a wrapper over either a front-end for an ETS table or a gen_server that simply keeps a config map of #{Key => Value} in its state. (I often avoid ETS unless it becomes an actual *need* in a program, which is somewhat rare. As long as you hide the act of accessing the config data behind functions it doesn't matter which approach you take to start out with because you can change things later -- just don't get into the habit of scattering naked reads from ETS tables throughout your code!)

While this is naive it is not a *bad* way to start playing with the idea of a config manager within your programs (and even this naive approach already gives you the magic ability to alter global settings on the fly), but a config manager can be much more useful.

In client-side programs you often have a large amount of evolving state that the user expects to persist between executions, or settings they can change globally across the program (for example, changing an l10n option or adding/removing a plugin in naive programs very often requires restarting the program -- that's unsatisfactory in many cases).
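To make the naive version above concrete, here is one possible shape of that process. This is only a sketch: the module name conf_man and the map-based state are choices for illustration, not the only way to do it.

-module(conf_man).
-behaviour(gen_server).
-export([start_link/1, config/1, config/2]).
-export([init/1, handle_call/3, handle_cast/2]).

start_link(Defaults) when is_map(Defaults) ->
    gen_server:start_link({local, ?MODULE}, ?MODULE, Defaults, []).

config(Key) ->
    gen_server:call(?MODULE, {get, Key}).

config(Key, Value) ->
    gen_server:call(?MODULE, {set, Key, Value}).

init(Defaults) ->
    {ok, Defaults}.

%% maps:get/2 crashes on a missing key -- exactly the naive behavior
%% that gets refined further on.
handle_call({get, Key}, _From, Conf) ->
    {reply, maps:get(Key, Conf), Conf};
handle_call({set, Key, Value}, _From, Conf) ->
    {reply, ok, maps:put(Key, Value, Conf)}.

handle_cast(_Msg, Conf) ->
    {noreply, Conf}.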
And to go even further down the rabbit hole: there is no such thing as "killing" an Erlang process. You can only send exit signals. It's just that there is a special exit signal ('kill') that is uncatchable and where the VM makes sure the process dies. Cheers, Adam > On 4. Dec 2019, at 15:37, Roger Lipscombe wrote: > > On Wed, 4 Dec 2019 at 11:17, Adam Lindberg wrote: >> Perhaps something along the lines of: >> >> (test@REDACTED)1> my_test:start(). >> Running... >> ** shell process received exit signal: noconnection >> (test@REDACTED)2> > > Except: the shell process isn't receiving an exit signal. It's being > killed. You can see that if you examine self() before and after that > message -- the REPL restarts the shell process. > > To make this more obvious, I have a custom prompt which displays the > shell's pid: https://github.com/rlipscombe/rl_erl_prompt [1]. > > But even with that said, the message could be improved, certainly. > > [1]: I note in passing that there's actually attempted support for > colour in there. I could never get it working. From zxq9@REDACTED Wed Dec 4 16:08:52 2019 From: zxq9@REDACTED (zxq9) Date: Thu, 5 Dec 2019 00:08:52 +0900 Subject: Dynamic Configuration Database in Erlang In-Reply-To: References: Message-ID: <0d62fc71-1daf-e6e4-b3da-04407cf65e95@zxq9.com> On 2019/12/04 22:03, Stefano Bertuola wrote: > I am trying to understand how to implement a configuration database > in Erlang (one which allows dynamic configuration). > > Does anyone have more details about this or any other implementation? Hi, Stefano. I make a configuration manager/DB in just about any program I write that needs configuration. Usually this takes the form of a single gen_server that is started first at the beginning of execution (you can have it start later, but usually your system configuration is pretty close to the very top of your supervision tree). You can implement a relatively naked "global dictionary" if you haven't figured out what sort of configuration data your program needs. This would be a simple process that exposes two functions: -spec config(Key :: atom()) -> Value :: term(). -spec config(Key :: atom(), Value :: term()) -> ok. This is as naive as things can possibly get and is really just a wrapper over either an ETS table front-end or a gen_server that simply keeps a config map of #{Key => Value} in its state. (I often avoid ETS unless it becomes an actual *need* in a program, which is somewhat rare. As long as you hide the act of accessing the config data behind functions it doesn't matter which approach you take to start out with because you can change things later -- just don't get into the habit of scattering naked reads from ETS tables throughout your code!) While this is naive it is not a *bad* way to start playing with the idea of a config manager within your programs (and even this naive approach already gives you the magic ability to alter global settings on the fly), but a config manager can be much more useful.
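To make the naive version concrete, here is a minimal sketch of the gen_server flavor (purely illustrative: no supervision details, no persistence, no validation; the module name is arbitrary):

-module(conf_man).
-behaviour(gen_server).
-export([start_link/0, config/1, config/2]).
-export([init/1, handle_call/3, handle_cast/2]).

start_link() ->
    gen_server:start_link({local, ?MODULE}, ?MODULE, #{}, []).

config(Key) ->
    gen_server:call(?MODULE, {get, Key}).

config(Key, Value) ->
    gen_server:call(?MODULE, {set, Key, Value}).

init(Conf) ->
    {ok, Conf}.

%% The whole configuration is just a map held in the server state.
handle_call({get, Key}, _From, Conf) ->
    {reply, maps:get(Key, Conf, undefined), Conf};
handle_call({set, Key, Value}, _From, Conf) ->
    {reply, ok, Conf#{Key => Value}}.

handle_cast(_Msg, Conf) ->
    {noreply, Conf}.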
In client-side programs you often have a large amount of evolving state that the user expects to persist between executions, or settings they can change globally across the program (for example, changing an l10n option or adding/removing a plugin in naive programs very often requires restarting the program -- that's unsatisfactory in many cases). In complex server-side programs there may be a huge amount of configuration data about who to contact, what connections must be established and so on. In those cases "config" doesn't just mean "read the config file" (though it usually involves that), but also often means "try to acquire external resources that may require discovery first". In this case the configuration manager process is where you can codify the discovery logic, default settings, config failure criteria (to cause a clean exit with appropriate logs/warnings/alerts sent), and other config initialization or safe/unsafe update checks. It can also be used as a place to register processes that need to be notified when config update events occur (imagine having a function where a process can register for config updates -- "send me a notification if Key/Value is changed" -- for example). The more complex a program's configuration state becomes the more often you'll find yourself writing specific functions to handle specific config keys and states (or specific clauses of an exposed config/2 function that match on specific keys) because config changes have a way over time of interacting with one another and requiring sanity checks that are better to formalize within a config manager module than scatter around in ad-hoc code. Beyond the small sketch above I'm explaining this in prose instead of providing a full example, but hopefully this gives you some ideas. Write a config manager and play around with the idea. Having a config manager process at the top of your supervision tree (and offloading it from ad-hoc config management throughout the rest of the code) can sometimes make a lot of other code simpler to test, debug and comprehend. -Craig From zxq9@REDACTED Wed Dec 4 16:19:05 2019 From: zxq9@REDACTED (zxq9) Date: Thu, 5 Dec 2019 00:19:05 +0900 Subject: Dynamic Configuration Database in Erlang In-Reply-To: <0d62fc71-1daf-e6e4-b3da-04407cf65e95@zxq9.com> References: <0d62fc71-1daf-e6e4-b3da-04407cf65e95@zxq9.com> Message-ID: <362fef0c-1dfb-38c5-d76a-207ae757b96b@zxq9.com> On 2019/12/05 0:08, zxq9 wrote: > On 2019/12/04 22:03, Stefano Bertuola wrote: >> I am trying to understand how to implement a >> configuration database in Erlang (one which allows dynamic configuration). >> >> Does anyone have more details about this or any other implementation? A quick note on this bit here: > You can implement a relatively naked "global dictionary" if you haven't > figured out what sort of configuration data your program needs. This > would be a simple process that exposes two functions: > > -spec config(Key :: atom()) -> Value :: term(). > -spec config(Key :: atom(), Value :: term()) -> ok. Really the spec should be: -spec config(Key :: atom()) -> {ok, Value :: term()} | undefined. -spec config(Key :: atom(), Value :: term()) -> ok. The alternative is to crash the system if someone requests an undefined value (or to return the atom 'undefined' and messily match on it all the time -- which is workable until the setting value you actually *intend* happens to itself be 'undefined'!). You could also use 'false' in place of 'undefined' if you happen to need some listy abstractions that operate over config data, of course, but anyway, I think you get the idea. I'm a big fan of {ok, Value} | {error, Reason} type return values because they provide more options for the author of the calling code, including direct assertion at the place they call it: % Only crash the calling process if Key doesn't exist {ok, Setting} = conf_man:config(Key), % ... etc. I just noticed this in retrospect.
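In the sketch from my previous mail that just means letting maps:find/2 do the work, since it already returns this shape (again, just a sketch -- only the lookup clause changes):

handle_call({get, Key}, _From, Conf) ->
    case maps:find(Key, Conf) of
        {ok, Value} -> {reply, {ok, Value}, Conf};
        error       -> {reply, undefined, Conf}
    end;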
Small detail, but it can have enough of an impact on calling code that it is worth mentioning. -Craig From roger@REDACTED Wed Dec 4 16:41:50 2019 From: roger@REDACTED (Roger Lipscombe) Date: Wed, 4 Dec 2019 15:41:50 +0000 Subject: ** exception exit: noconnection In-Reply-To: <570EAF20-51FE-4EE8-AEED-64722C10EB0F@alind.io> References: <266CA31E-8344-415C-B3D2-D684B844F184@alind.io> <72738C19-D1B1-44D6-B7F6-83B8E0CC6776@alind.io> <570EAF20-51FE-4EE8-AEED-64722C10EB0F@alind.io> Message-ID: On Wed, 4 Dec 2019 at 15:06, Adam Lindberg wrote: > It does receive an exit signal, and then dies because of it, no? Sorry. Lack of precision: it's not *trapping* exit signals by default, so it gets default-killed. The process that owns the shell process traps *that*, but can't tell the difference, so emitting a different message for the two cases might not be that simple. From roger@REDACTED Wed Dec 4 16:45:33 2019 From: roger@REDACTED (Roger Lipscombe) Date: Wed, 4 Dec 2019 15:45:33 +0000 Subject: Dynamic Configuration Database in Erlang In-Reply-To: <0d62fc71-1daf-e6e4-b3da-04407cf65e95@zxq9.com> References: <0d62fc71-1daf-e6e4-b3da-04407cf65e95@zxq9.com> Message-ID: On Wed, 4 Dec 2019 at 15:10, zxq9 wrote: > > On 2019/12/04 22:03, Stefano Bertuola wrote: > > I am trying to understand how to implement a configuration database > > in Erlang (one which allows dynamic configuration). > > > > Does anyone have more details about this or any other implementation? > > Hi, Stefano. > > I make a configuration manager/DB in just about any program I write that > needs configuration. Usually this takes the form of a single gen_server > that is started first at the beginning of execution (you can have it > start later, but usually your system configuration is pretty close to > the very top of your supervision tree). Counterpoint: if you're dealing with a larger, mixed, system that has Erlang and non-Erlang components, you probably want to look at external configuration databases, such as etcd or consul. They'll deal with service discovery, health-checking, and all that other complicated stuff. Or you might want a mixture of the two approaches. From hello@REDACTED Wed Dec 4 17:41:45 2019 From: hello@REDACTED (Adam Lindberg) Date: Wed, 4 Dec 2019 17:41:45 +0100 Subject: ** exception exit: noconnection In-Reply-To: References: <266CA31E-8344-415C-B3D2-D684B844F184@alind.io> <72738C19-D1B1-44D6-B7F6-83B8E0CC6776@alind.io> <570EAF20-51FE-4EE8-AEED-64722C10EB0F@alind.io> Message-ID: Got it. What mostly tripped me up was not being able to catch the error by wrapping the code in a try-catch. Therefore I think it would make sense to print something else when the shell actually receives an exit signal, so users can understand that these exits are somehow special. Cheers, Adam > On 4. Dec 2019, at 16:41, Roger Lipscombe wrote: > > On Wed, 4 Dec 2019 at 15:06, Adam Lindberg wrote: >> It does receive an exit signal, and then dies because of it, no? > > Sorry. Lack of precision: it's not *trapping* exit signals by default, > so it gets default-killed. The process that owns the shell process > traps *that*, but can't tell the difference, so emitting a different > message for the two cases might not be that simple. From roger@REDACTED Wed Dec 4 19:20:34 2019 From: roger@REDACTED (Roger Lipscombe) Date: Wed, 4 Dec 2019 18:20:34 +0000 Subject: Telemetry Examples? Message-ID: Our current scenario is this: folsom -> folsomite -> graphite -> grafana folsom looks to be abandoned, so we're looking at alternatives.
Top of the list currently is probably exometer. Telemetry (https://github.com/beam-telemetry/telemetry) *might* be interesting, but I can't find any clear examples of how I'd use it. What's the Erlang/Telemetry equivalent to the folsomite->graphite step? We're also looking at Prometheus. Is there a plugin for that? From cean.ebengt@REDACTED Wed Dec 4 21:04:57 2019 From: cean.ebengt@REDACTED (bengt) Date: Wed, 4 Dec 2019 21:04:57 +0100 Subject: Telemetry Examples? In-Reply-To: References: Message-ID: <3ACE4BEB-5C96-4946-92D3-B4E7990D9B27@gmail.com> Greetings, For Prometheus there is https://github.com/deadtrickster/prometheus.erl Best Wishes, bengt > On 4 Dec 2019, at 19:20, Roger Lipscombe wrote: > > Our current scenario is this: > > folsom -> folsomite -> graphite -> grafana > > folsom looks to be abandoned, so we're looking at alternatives. Top of > the list currently is probably exometer. > > Telemetry (https://github.com/beam-telemetry/telemetry) *might* be > interesting, but I can't find any clear examples of how I'd use it. > > What's the Erlang/Telemetry equivalent to the folsomite->graphite > step? We're also looking at Prometheus. Is there a plugin for that? From t@REDACTED Wed Dec 4 21:22:17 2019 From: t@REDACTED (Tristan Sloughter) Date: Wed, 04 Dec 2019 13:22:17 -0700 Subject: Telemetry Examples? In-Reply-To: <3ACE4BEB-5C96-4946-92D3-B4E7990D9B27@gmail.com> References: <3ACE4BEB-5C96-4946-92D3-B4E7990D9B27@gmail.com> Message-ID: <862c1774-0fed-4f69-acfe-23965382f506@www.fastmail.com> A year ago I would have told you OpenCensus. But OpenCensus has merged with OpenTracing into the new project OpenTelemetry and the metrics part of OpenTelemetry has been in flux, plus no one has had time to work on the Erlang implementation of it yet. Because of this I would also suggest https://github.com/deadtrickster/prometheus.erl -- tho opencensus does still work and has some interesting features https://opencensus.io/quickstart/erlang/metrics/ Eventually I'll be suggesting OpenTelemetry, which can then export to many different tools like prometheus, influxdb, datadog, etc. @hauleth put together an "awesome beam observability" list for the Foundation's Observability Working Group repo https://github.com/erlef/eef-observability-wg/blob/master/README.md#awesome-beam-observability that may give you some ideas as well. Telemetry (https://github.com/beam-telemetry/telemetry) is an abstraction above something like prometheus.erl. It will likely eventually be easily combined with OpenTelemetry. For now you could create handlers for it that called into prometheus.erl. Tristan On Wed, Dec 4, 2019, at 13:04, bengt wrote: > Greetings, > > For Prometheus there is https://github.com/deadtrickster/prometheus.erl > > Best Wishes, > bengt > > > On 4 Dec 2019, at 19:20, Roger Lipscombe wrote: > > > > Our current scenario is this: > > > > folsom -> folsomite -> graphite -> grafana > > > > folsom looks to be abandoned, so we're looking at alternatives. Top of > > the list currently is probably exometer. > > > > Telemetry (https://github.com/beam-telemetry/telemetry) *might* be > > interesting, but I can't find any clear examples of how I'd use it. > > > > What's the Erlang/Telemetry equivalent to the folsomite->graphite > > step? We're also looking at Prometheus. Is there a plugin for that? 
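P.S. To make the "handlers" idea concrete, a minimal, untested sketch (the event and metric names here are made up for illustration; it assumes telemetry:attach/4 and the prometheus.erl counter API):

-module(prom_bridge).
-export([setup/0, handle_event/4]).

setup() ->
    %% Declare the Prometheus metric once, up front
    prometheus_counter:declare([{name, my_app_queries_total},
                                {help, "Total queries executed."}]),
    %% Route the telemetry event into the handler below
    telemetry:attach(<<"prom-bridge-queries">>,
                     [my_app, query, done],
                     fun ?MODULE:handle_event/4,
                     []).

handle_event([my_app, query, done], _Measurements, _Metadata, _Config) ->
    prometheus_counter:inc(my_app_queries_total).

Anything that then calls telemetry:execute([my_app, query, done], #{duration => D}, #{}) bumps the Prometheus counter.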
From mailparmalat@REDACTED Thu Dec 5 17:39:06 2019 From: mailparmalat@REDACTED (Steven) Date: Thu, 5 Dec 2019 18:39:06 +0200 Subject: gen_udp:send/4 Message-ID: Hi, We have a situation where gen_udp:send/4 does not return ok and it just hangs and the process is waiting indefinitely. e.g. {current_function,{prim_inet,sendto,4}}, {initial_call,{proc_lib,init_p,5}}, {status,running}, {message_queue_len,15363062}, The send buffer size on the socket is around 200KB. 3> inet:getopts(S, [sndbuf]). {ok,[{sndbuf,212992}]} The UDP socket doesn't receive any incoming packets so it is used for sending only. Running R21-3 with redhat 7.5. Would like to ask the group under which conditions will the port not come back with inet_reply? I assume if the sndbuf is full then it should come back with enobufs or some sort, but it should be quick enough to flush the sndbuf to the interface. In prim_inet:sendto/4, it always expects an inet reply from the OS, but it never comes and never times out. try erlang:port_command(S, PortCommandData) of true -> receive {inet_reply,S,Reply} -> ?DBG_FORMAT( "prim_inet:sendto() -> ~p~n", [Reply]), Reply end catch Thanks -------------- next part -------------- An HTML attachment was scrubbed... URL: From benmmurphy@REDACTED Thu Dec 5 19:26:54 2019 From: benmmurphy@REDACTED (Ben Murphy) Date: Thu, 5 Dec 2019 18:26:54 +0000 Subject: gen_udp:send/4 In-Reply-To: References: Message-ID: There used to be a bug with using the inet functions and having a large message queue. Because the receive couldn't use the selective receive optimisation it would have to scan the whole of the message queue. It looks like you have a large message queue, so maybe this bug still exists and affects you. Sent from my iPhone > On 5 Dec 2019, at 16:39, Steven wrote: > > Hi, > > We have a situation where gen_udp:send/4 does not return ok and it just hangs and the process is waiting indefinitely. > > e.g. > {current_function,{prim_inet,sendto,4}}, > {initial_call,{proc_lib,init_p,5}}, > {status,running}, > {message_queue_len,15363062}, > > The send buffer size on the socket is around 200KB. > > 3> inet:getopts(S, [sndbuf]). > {ok,[{sndbuf,212992}]} > > The UDP socket doesn't receive any incoming packets so it is used for sending only. Running R21-3 with redhat 7.5. Would like to ask the group under which conditions will the port not come back with inet_reply? I assume if the sndbuf is full then it should come back with enobufs or some sort, but it should be quick enough to flush the sndbuf to the interface. > > In prim_inet:sendto/4, it always expects an inet reply from the OS, but it never comes and never times out. > > try erlang:port_command(S, PortCommandData) of > true -> > receive > {inet_reply,S,Reply} -> > ?DBG_FORMAT( > "prim_inet:sendto() -> ~p~n", [Reply]), > Reply > end > catch > > Thanks From v@REDACTED Thu Dec 5 20:32:49 2019 From: v@REDACTED (Valentin Micic) Date: Thu, 5 Dec 2019 21:32:49 +0200 Subject: gen_udp:send/4 In-Reply-To: References: Message-ID: > On 05 Dec 2019, at 18:39, Steven wrote: > > Hi, > > We have a situation where gen_udp:send/4 does not return ok and it just hangs and the process is waiting indefinitely. > > e.g. > {current_function,{prim_inet,sendto,4}}, > {initial_call,{proc_lib,init_p,5}}, > {status,running}, > {message_queue_len,15363062}, It appears that your server (i.e. the process that is calling gen_udp:send/4) has way too many messages on its process queue -- thus gen_udp:send/4 hanging is probably a symptom and not necessarily the cause.
Could it be that your server itself employs a selective receive for the incoming events/requests processing? E.g. main_loop() -> receive {"Some pattern", _} -> gen_udp:send(...); "Some other pattern" -> do_something_else(...); _ -> ignore after SomeTime -> do_something_different() end, main_loop(). The construct above will slow message processing, and that may account for the more than 15,000,000 messages reported to be on your server's message queue. In my view, selective receive is a bad idea for a server-side implementation. The server should rather use something like this: main_loop() -> receive Msg -> process_msg( Msg ) after SomeTime -> do_something_different() end, main_loop(). process_msg( {"Some pattern", _} ) -> gen_udp:send(...); process_msg( "Some other pattern" ) -> do_something_else(...); process_msg( _ ) -> ignore. This will always keep the server's process queue reasonably empty, thus, even when the server uses a function that hides a selective receive, such a function will come back reasonably quickly. Kind regards V/ > The send buffer size on the socket is around 200KB. > > 3> inet:getopts(S, [sndbuf]). > {ok,[{sndbuf,212992}]} > > The UDP socket doesn't receive any incoming packets so it is used for sending only. Running R21-3 with redhat 7.5. Would like to ask the group under which conditions will the port not come back with inet_reply? I assume if the sndbuf is full then it should come back with enobufs or some sort, but it should be quick enough to flush the sndbuf to the interface. > > In prim_inet:sendto/4, it always expects an inet reply from the OS, but it never comes and never times out. > > try erlang:port_command(S, PortCommandData) of > true -> > receive > {inet_reply,S,Reply} -> > ?DBG_FORMAT( > "prim_inet:sendto() -> ~p~n", [Reply]), > Reply > end > catch > > Thanks -------------- next part -------------- An HTML attachment was scrubbed... URL: From mailparmalat@REDACTED Fri Dec 6 07:15:39 2019 From: mailparmalat@REDACTED (Steven) Date: Fri, 6 Dec 2019 08:15:39 +0200 Subject: gen_udp:send/4 In-Reply-To: References: Message-ID: Thanks, Seems like under CPU load and with a large volume of messages coming into the process, the selective receive will slow it down and make it look like the udp send is taking long or is not acknowledged, while the acknowledgement is in fact somewhere in the message queue. Steven On Thu, Dec 5, 2019 at 9:32 PM Valentin Micic wrote: > > On 05 Dec 2019, at 18:39, Steven wrote: > > Hi, > > We have a situation where gen_udp:send/4 does not return ok and it just > hangs and the process is waiting indefinitely. > > e.g. > {current_function,{prim_inet,sendto,4}}, > {initial_call,{proc_lib,init_p,5}}, > {status,running}, > {message_queue_len,15363062}, > > > It appears that your server (i.e. the process that is calling gen_udp:send/4) > has way too many messages on its process queue -- thus gen_udp:send/4 hanging > is probably a symptom and not necessarily the cause. > > Could it be that your server itself employs a selective receive for the > incoming events/requests processing? E.g. > > main_loop() > -> > receive > {"Some pattern", _} -> gen_udp:send(...); > "Some other pattern" -> do_something_else(...); > _ -> ignore > after SomeTime -> do_something_different() > end, > main_loop(). > > The construct above will slow message processing, and that may account for > the more than 15,000,000 messages reported to be on your server's message queue. > In my view, selective receive is a bad idea for a server-side > implementation. > The server should rather use something like this: > > main_loop() > -> > receive > Msg -> process_msg( Msg ) > after SomeTime -> do_something_different() > end, > main_loop(). > > process_msg( {"Some pattern", _} ) -> gen_udp:send(...); > process_msg( "Some other pattern" ) -> do_something_else(...); > process_msg( _ ) -> ignore. > > This will always keep the server's process queue reasonably empty, thus, even > when the server uses a function that hides a selective receive, such a function > will come back reasonably quickly. > Kind regards > > V/ > > > The send buffer size on the socket is around 200KB. > > 3> inet:getopts(S, [sndbuf]). > {ok,[{sndbuf,212992}]} > > The UDP socket doesn't receive any incoming packets so it is used for > sending only. Running R21-3 with redhat 7.5. Would like to ask the group > under which conditions will the port not come back with inet_reply? I > assume if the sndbuf is full then it should come back with enobufs or > some sort, but it should be quick enough to flush the sndbuf to the interface. > > In prim_inet:sendto/4, it always expects an inet reply from the OS, but it never > comes and never times out. > > try erlang:port_command(S, PortCommandData) of > true -> > receive > {inet_reply,S,Reply} -> > ?DBG_FORMAT( > "prim_inet:sendto() -> ~p~n", > [Reply]), > Reply > end > catch > > Thanks > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mikpelinux@REDACTED Fri Dec 6 21:00:31 2019 From: mikpelinux@REDACTED (Mikael Pettersson) Date: Fri, 6 Dec 2019 21:00:31 +0100 Subject: gen_udp:send/4 In-Reply-To: References: Message-ID: On Thu, Dec 5, 2019 at 5:39 PM Steven wrote: > > Hi, > > We have a situation where gen_udp:send/4 does not return ok and it just hangs and the process is waiting indefinitely. > > e.g. > {current_function,{prim_inet,sendto,4}}, > {initial_call,{proc_lib,init_p,5}}, > {status,running}, > {message_queue_len,15363062}, > > The send buffer size on the socket is around 200KB. > > 3> inet:getopts(S, [sndbuf]). > {ok,[{sndbuf,212992}]} > > The UDP socket doesn't receive any incoming packets so it is used for sending only. Running R21-3 with redhat 7.5. Would like to ask the group under which conditions will the port not come back with inet_reply? I assume if the sndbuf is full then it should come back with enobufs or some sort, but it should be quick enough to flush the sndbuf to the interface. > > In prim_inet:sendto/4, it always expects an inet reply from the OS, but it never comes and never times out. > > try erlang:port_command(S, PortCommandData) of > true -> > receive > {inet_reply,S,Reply} -> > ?DBG_FORMAT( > "prim_inet:sendto() -> ~p~n", [Reply]), > Reply > end > catch > > Thanks I believe a (or the) problem is that the selective receive in prim_inet:sendto/4 doesn't use the ref trick, and therefore has to scan the entire 15+ million entry message queue of the sender process looking for that inet_reply message. That is going to be very slow. (Furthermore there are other performance penalties associated with having very long message queues, see erlang:process_flag(message_queue_data, off_heap) for one possible remedy.) It would be nice if the return signalling from calling port_command could be fixed to either enable the ref trick or to not use messages at all, but until that is the case, you know that making these calls while having a long message queue is going to be expensive. We've found that it's often useful to offload such calls to temporary helper processes, that by construction have very short message queues -- along the lines of the sketch below.
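A rough illustration of the idea (illustrative only, not our actual code; the freshly created monitor reference is what makes the receive eligible for the ref-trick optimisation):

send_via_helper(Socket, Host, Port, Packet) ->
    {_Pid, MRef} =
        spawn_monitor(fun() ->
                          %% The helper's message queue is empty, so the
                          %% selective receive inside gen_udp:send/4 is cheap.
                          exit(gen_udp:send(Socket, Host, Port, Packet))
                      end),
    receive
        %% Matches only on the fresh MRef, so the caller's long
        %% message queue is never scanned.
        {'DOWN', MRef, process, _, Result} -> Result
    end.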
(The signalling between the original process and the helper should of course use the ref trick; this is non-trivial to get right.) From mailparmalat@REDACTED Fri Dec 6 21:40:22 2019 From: mailparmalat@REDACTED (Steven) Date: Fri, 6 Dec 2019 22:40:22 +0200 Subject: gen_udp:send/4 In-Reply-To: References: Message-ID: Hi, It actually took a while to build up to that kind of message queue size. On the upper level, there is no selective receive, and incoming messages are accumulated and then sent out via UDP. Generally that should be relatively fast; however, under load the message queue can slowly creep up, leaving that prim_inet sendto selective receive problematic. Thank you for your suggestions as well; obviously keeping a short message queue is the ideal situation. Steven On Fri, 06 Dec 2019 at 22:00, Mikael Pettersson wrote: > On Thu, Dec 5, 2019 at 5:39 PM Steven wrote: > > > > Hi, > > > > We have a situation where gen_udp:send/4 does not return ok and it just > hangs and the process is waiting indefinitely. > > > > e.g. > > {current_function,{prim_inet,sendto,4}}, > > {initial_call,{proc_lib,init_p,5}}, > > {status,running}, > > {message_queue_len,15363062}, > > > > The send buffer size on the socket is around 200KB. > > > > 3> inet:getopts(S, [sndbuf]). > > {ok,[{sndbuf,212992}]} > > > > The UDP socket doesn't receive any incoming packets so it is used for > sending only. Running R21-3 with redhat 7.5. Would like to ask the group > under which conditions will the port not come back with inet_reply? I > assume if the sndbuf is full then it should come back with enobufs or some > sort, but it should be quick enough to flush the sndbuf to the interface. > > > > In prim_inet:sendto/4, it always expects an inet reply from the OS, but it never > comes and never times out. > > > > try erlang:port_command(S, PortCommandData) of > > true -> > > receive > > {inet_reply,S,Reply} -> > > ?DBG_FORMAT( > > "prim_inet:sendto() -> ~p~n", > [Reply]), > > Reply > > end > > catch > > > > Thanks > > I believe a (or the) problem is that the selective receive in > prim_inet:sendto/4 doesn't use the ref trick, and therefore has to scan > the entire 15+ million entry message queue of the sender process > looking for that inet_reply message. That is going to be very slow. > (Furthermore there are other performance penalties associated with > having very long message queues, see > erlang:process_flag(message_queue_data, off_heap) for one possible > remedy.) > > It would be nice if the return signalling from calling port_command > could be fixed to either enable the ref trick or to not use messages > at all, but until that is the case, you know that making these calls > while having a long message queue is going to be expensive. We've > found that it's often useful to offload such calls to temporary helper > processes, that by construction have very short message queues. (The > signalling between the original process and the helper should of > course use the ref trick; this is non-trivial to get right.) From bchesneau@REDACTED Sat Dec 7 11:25:17 2019 From: bchesneau@REDACTED (Benoit Chesneau) Date: Sat, 7 Dec 2019 11:25:17 +0100 Subject: Which versions of Erlang OTP are you using with Hackney? Message-ID: Hi all, I'm doing some refactoring in Hackney with a view to improving support for proxies and adding more features, but I am wondering what the most common version to support is.
I have created a poll on Github for it: https://github.com/benoitc/hackney/issues/601 It would be very helpful if you can answer :) Benoît -------------- next part -------------- An HTML attachment was scrubbed... URL: From frank.muller.erl@REDACTED Sat Dec 7 13:57:34 2019 From: frank.muller.erl@REDACTED (Frank Muller) Date: Sat, 7 Dec 2019 13:57:34 +0100 Subject: Which versions of Erlang OTP are you using with Hackney? In-Reply-To: References: Message-ID: All 21.X.Y.Z /Frank Hi all, > > I'm doing some refactoring in Hackney with a view to improving support for > proxies and adding more features, but I am wondering what the most common > version to support is. I have created a poll on Github for it: > > https://github.com/benoitc/hackney/issues/601 > > It would be very helpful if you can answer :) > > Benoît > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ostinelli@REDACTED Sat Dec 7 18:54:48 2019 From: ostinelli@REDACTED (Roberto Ostinelli) Date: Sat, 7 Dec 2019 18:54:48 +0100 Subject: Exercise: ETS with secondary indices Message-ID: All, I'm doing a conceptual exercise to evaluate the usage of ETS with secondary indices. The exercise is really simple: 1. There are elements which can belong to groups, in an m-to-n relationship (every element can belong to one or more groups). 2. Elements and groups can be created/deleted at will. 3. We need to find the groups an element belongs to. 4. We need to find the elements included in a group. 5. We need to not keep an element if it does not belong to a group, nor a group if it does not have elements in it (this is to avoid memory leaks since groups / elements get added / removed). All of these operations should be optimized as much as possible. A. One possibility is to have 2 ETS tables of type set, that contain the following terms: 1. table elements with tuple format {Element, Groups} 2. table groups with tuple format {Group, Elements} Groups and Elements would be maps (so that finding an element of the map does not require traversing an array). Adding an Element to a Group would mean: - An insert in the table elements where the Group gets added to the Groups map (bonus of using a map: no duplicates can be created). - An insert in the table groups where the Element gets added to the Elements map. Retrieving the groups an element belongs to is as simple as getting the Element in the table elements; the groups will be the keys of the Groups map. Retrieving the elements a group contains is as simple as getting the Group in the table groups; the elements will be the keys of the Elements map. Deleting is far from being optimized though. For instance, deleting an Element means: - Getting the Groups it belongs to with a lookup in the table elements. - For every Group in the Groups map, remove the Element from the Elements map. It becomes something like this: case ets:lookup(elements, Element) of [{Element, Groups}] -> ets:delete(elements, Element), lists:foreach(fun(Group) -> case ets:lookup(groups, Group) of [{Group, Elements}] -> case maps:remove(Element, Elements) of Elements1 when map_size(Elements1) == 0 -> ets:delete(groups, Group); Elements1 -> ets:insert(groups, {Group, Elements1}) end; _ -> ok end end, maps:keys(Groups)); _ -> ok end. Ouch. Same goes for groups. And this is to delete *one* element, imagine if bigger operations need to be made. B. Another possibility would be to have 2 ETS tables of type bag, that contain the following terms: 1. table elements with tuple format {Element, Group} 2. table groups with tuple format {Group, Element} Adding an Element to a Group means: - An insert in the table elements of the tuple {Element, Group}. - An insert in the table groups of the tuple {Group, Element}. Retrieving the groups an element belongs to requires an ets:match_object/2 or similar such as: ets:match_object(elements, {Element, _ = '_'}). Retrieving the elements a group contains requires an ets:match_object/2 or similar such as: ets:match_object(groups, {Group, _ = '_'}). So retrieving requires the traversing of a table, but it should be relatively quick, given that the match is done on the index itself. Deleting is not particularly optimized though, because it requires the traversing of a table with elements that are not indexed. For instance, deleting an Element would look something like: ets:match_delete(elements, {Element, _ = '_'}). %% fast, indexed ets:match_delete(groups, {_ = '_', Element}). %% not fast, table is being traversed. What would *you* do? Any takers are welcome ^^_ Cheers, r. -------------- next part -------------- An HTML attachment was scrubbed... URL: From pankratz.aaron@REDACTED Sat Dec 7 20:25:51 2019 From: pankratz.aaron@REDACTED (Aaron Pankratz) Date: Sat, 7 Dec 2019 13:25:51 -0600 Subject: Mnesia documentation Message-ID: Hi, the Mnesia documentation has this code in section 4.3: case mnesia:wait_for_tables([a, b], 20000) of {timeout, RemainingTabs} -> panic(RemainingTabs); ok -> synced end. I get an error, "function panic/1 undefined", when I try to use it. What is that panic function? Best, Aaron From luke@REDACTED Sat Dec 7 21:34:14 2019 From: luke@REDACTED (Luke Bakken) Date: Sat, 7 Dec 2019 12:34:14 -0800 Subject: Mnesia documentation In-Reply-To: References: Message-ID: Hi Aaron, panic/1 exists and is an exported function in the "company" module in the master branch: ~/development/erlang/otp (master=) $ find . -type f -name '*.erl' -exec egrep '^panic\(' {} + ./lib/mnesia/doc/src/company.erl:panic(X) -> exit({panic, X}). This appears to be example code used in the mnesia documentation. Thanks, Luke On Sat, Dec 7, 2019 at 12:21 PM Aaron Pankratz wrote: > > Hi, the Mnesia documentation has this code in section 4.3: > > case mnesia:wait_for_tables([a, b], 20000) of > {timeout, RemainingTabs} -> > panic(RemainingTabs); > ok -> > synced > end. > > I get an error, "function panic/1 undefined", when I try to use it. > What is that panic function? > > Best, > Aaron From pankratz.aaron@REDACTED Sat Dec 7 21:44:06 2019 From: pankratz.aaron@REDACTED (Aaron Pankratz) Date: Sat, 7 Dec 2019 14:44:06 -0600 Subject: Mnesia documentation In-Reply-To: References: Message-ID: Thank you, Luke! On Sat, Dec 7, 2019 at 2:34 PM Luke Bakken wrote: > > Hi Aaron, > > panic/1 exists and is an exported function in the "company" module in > the master branch: > > ~/development/erlang/otp (master=) > $ find . -type f -name '*.erl' -exec egrep '^panic\(' {} + > ./lib/mnesia/doc/src/company.erl:panic(X) -> exit({panic, X}). > > This appears to be example code used in the mnesia documentation. > > Thanks, > Luke > > On Sat, Dec 7, 2019 at 12:21 PM Aaron Pankratz wrote: > > > > Hi, the Mnesia documentation has this code in section 4.3: > > > > case mnesia:wait_for_tables([a, b], 20000) of > > {timeout, RemainingTabs} -> > > panic(RemainingTabs); > > ok -> > > synced > > end. > > > > I get an error, "function panic/1 undefined", when I try to use it. > > What is that panic function?
> > Best, > > Aaron From frank.muller.erl@REDACTED Sun Dec 8 09:29:36 2019 From: frank.muller.erl@REDACTED (Frank Muller) Date: Sun, 8 Dec 2019 09:29:36 +0100 Subject: ConETS table consistency while entries are deleted from 2 processes Message-ID: Hi all, Let's assume a public ETS table of type 'ordered_set'. Every minute, a process A is deleting expired entries from it (see below). At the same time, another process B can delete any entry (e.g. randomly) from this table. I've implemented two strategies which can be used by process A: _________________________________________ -spec purge1(pos_integer()) -> ok. purge1(Now) -> Key = ets:first(?MODULE), purge1(Now, Key). purge1(Now, Key) when Key =< Now -> true = ets:delete(?MODULE, Key), %% idempotent call purge1(Now); purge1(_, _) -> %% '$end_of_table' or Key > Now ok. or: _________________________________________ -spec purge2(pos_integer()) -> ok. purge2(Now) -> Key = ets:first(?MODULE), purge2(Now, Key). purge2(Now, Key) when Key =< Now -> Next = ets:next(?MODULE, Key), true = ets:delete(?MODULE, Key), %% idempotent call purge2(Now, Next); purge2(_, _) -> %% '$end_of_table' or Key > Now ok. _________________________________________ Question: is it safe in general to have process B deleting entries while A is running? Which strategy is more consistent: purge1/1 or purge2/1? Best /Frank -------------- next part -------------- An HTML attachment was scrubbed... URL: From bchesneau@REDACTED Sun Dec 8 11:56:48 2019 From: bchesneau@REDACTED (Benoit Chesneau) Date: Sun, 8 Dec 2019 11:56:48 +0100 Subject: Which versions of Erlang OTP are you using with Hackney? In-Reply-To: References: Message-ID: thanks for the info :) On Sat 7 Dec 2019 at 13:57 Frank Muller wrote: > All 21.X.Y.Z > > /Frank > > Hi all, >> >> I'm doing some refactoring in Hackney with a view to improving support for >> proxies and adding more features, but I am wondering what the most common >> version to support is. I have created a poll on Github for it: >> >> https://github.com/benoitc/hackney/issues/601 >> >> It would be very helpful if you can answer :) >> >> Benoît >> > -- Sent from my Mobile -------------- next part -------------- An HTML attachment was scrubbed... URL: From jesper.louis.andersen@REDACTED Sun Dec 8 13:56:20 2019 From: jesper.louis.andersen@REDACTED (Jesper Louis Andersen) Date: Sun, 8 Dec 2019 13:56:20 +0100 Subject: ConETS table consistency while entries are deleted from 2 processes In-Reply-To: References: Message-ID: On Sun, Dec 8, 2019 at 9:29 AM Frank Muller wrote: > Hi all, > > > Let's assume a public ETS table of type 'ordered_set'. > > The ETS documentation defines "safe traversal". Traversal of `ordered_set` tables is always safe. That is, they will do the "right thing" if updates are done while traversal happens. For tables of the other types, you either do the whole traversal in one call, or you use the safe_fixtable/2 call to fix the table while traversal is happening. The underlying reason is that the table has an order. This means we can always drill down into the table and find the place we "were", so to speak, by utilizing this order. In the case of e.g. `set` tables, we don't a priori have an order, so we are encoding something like a hash bucket and the element we reached. But if that element is gone, we don't know where we were, and there is no other structural information to go by. The fixtable calls basically postpone changes to the table in a separate buffer until the traversal is done, then replay the buffer on the table.
This ensures the core table is traversal-stable. Requests to the table first check the buffer before checking the main core table[0]. [0] This model is essentially a 2-level log-structured-merge-tree (LSM tree) variant. -- J. -------------- next part -------------- An HTML attachment was scrubbed... URL: From frank.muller.erl@REDACTED Sun Dec 8 15:06:52 2019 From: frank.muller.erl@REDACTED (Frank Muller) Date: Sun, 8 Dec 2019 15:06:52 +0100 Subject: ConETS table consistency while entries are deleted from 2 processes In-Reply-To: References: Message-ID: Crystal clear Jesper, thanks. Is purge1/1 valid/correct? /Frank On Sun, Dec 8, 2019 at 9:29 AM Frank Muller > wrote: >> Hi all, >> >> >> Let's assume a public ETS table of type 'ordered_set'. >> >> > The ETS documentation defines "safe traversal". Traversal of `ordered_set` > tables is always safe. That is, they will do the "right > thing" if updates > are done while traversal happens. > > For tables of the other types, you either do the whole traversal in one > call, or you use the safe_fixtable/2 call to fix the table while traversal > is happening. > > The underlying reason is that the table has an order. This means we can > always drill down into the table and find the place we "were", so to speak, > by utilizing this order. In the case of e.g. `set` tables, we don't a > priori have an order, so we are encoding something like a hash bucket and > the element we reached. But if that element is gone, we don't know where we > were, and there is no other structural information to go by. The fixtable > calls basically postpone changes to the table in a separate buffer until > the traversal is done, then replay the buffer on the table. This ensures > the core table is traversal-stable. Requests to the table first check the > buffer before checking the main core table[0] > > [0] This model is essentially a 2-level log-structured-merge-tree (LSM > tree) variant. > > -- > J. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jesper.louis.andersen@REDACTED Sun Dec 8 15:50:43 2019 From: jesper.louis.andersen@REDACTED (Jesper Louis Andersen) Date: Sun, 8 Dec 2019 15:50:43 +0100 Subject: ConETS table consistency while entries are deleted from 2 processes In-Reply-To: References: Message-ID: I think both would work. There is also the possibility that B sends a delete message to A, which then deletes keys. This forces linearization in A. How many concurrent deletes you have sort-of determines whether this strategy is more viable, and also in which direction the system might go in the future. If you only have a handful of deletes I would just factor it through a single process since a lot of things are easier that way, typically. In any case, your Erlang/OTP version matters here. For 22.0 and onwards, you can set `{write_concurrency, true}` for ordered_set tables and expect an improvement as core count increases. Before, there is no effect and you are essentially grabbing a table lock for the whole table whenever you want to issue a write (i.e. delete). So before 22, concurrent deletes yield no performance improvement, which argues that a messaging model might be simpler to reason about[0]. [0] Why? Because there is an invariant which is easy to maintain: if the process is purging, messages wait in the mailbox. So no other deletes can happen in between. If the process is deleting, it cannot be purging. On Sun, Dec 8, 2019 at 3:07 PM Frank Muller wrote: > Crystal clear Jesper, thanks. > > Is purge1/1 valid/correct?
> > /Frank > > On Sun, Dec 8, 2019 at 9:29 AM Frank Muller >> wrote: >> >>> Hi all >>> >>> >>> Lets assume a public ETS table of type ?ordered_set?. >>> >>> >> The ETS documentation defines "safe traversal". Traversal of >> `ordered_set` tables are always safe. That is, they will do the "right >> thing" if updates are done while traversal happens. >> >> For tables of the other types, you either do the whole traversal in one >> call, or you use the safe_fixtable/2 call to fix the table while traversal >> is happening. >> >> The underlying reason is that the table has an order. This means we can >> always drill down into the table and find the place we "were" so to speak >> by utilizing this order. In the case of e.g., `set` tables, we don't a >> priori have an order, so we are encoding something like a hash bucket and >> the element we reached. But if that element is gone, we don't know where we >> were, and there is no other structural information to go by. The fixtable >> calls basically postpones changes to the table in a separate buffer until >> the traversal is done, then replays the buffer on the table. This ensures >> the core table is traversal-stable. Requests to the table first checks the >> buffer before checking the main core table[0] >> >> [0] This model is essentially a 2 level log-structured-merge-tree (LSM >> tree) variant. >> >> -- >> J. >> > -- J. -------------- next part -------------- An HTML attachment was scrubbed... URL: From frank.muller.erl@REDACTED Sun Dec 8 17:45:37 2019 From: frank.muller.erl@REDACTED (Frank Muller) Date: Sun, 8 Dec 2019 17:45:37 +0100 Subject: ConETS table consistency while entries are deleted from 2 processes In-Reply-To: References: Message-ID: Got it. I?m on the latest 22.x and gonna go with purge1 then. /Frank I think both would work. > > There is also the possibility B sends a delete message to A which then > deletes keys. This forces linearization in A. > > How many concurrent deletes you have sort-of determines if this strategy > is more viable, and also in which direction the system might go in the > future. If you only have a handful deletes I would just factor it through a > single process since a lot of things are easier that way, typically. > > In any case, your Erlang/OTP version matter here. For 22.0 and onwards, > you can set `{write_concurrency, true}` for ordered_set tables and expect > an improvement as core count increases. Before, there is no effect and you > are essentially grabbing a table lock for the whole table whenever you want > to issue a write (i.e. delete). So before 22, concurrent deletes yields no > performance improvement, which argues a messaging model might be simpler to > reason about[0]. > > [0] Why? Because there is an invariant which is easy to maintain: if the > process is purging, messages wait in the mailbox. So no other deletes can > happen in between. If the process is deleting, it cannot be purging. > > > > On Sun, Dec 8, 2019 at 3:07 PM Frank Muller > wrote: > >> Crystal clear Jesper, thanks. >> >> is purge1/1 valid/correct? >> >> /Frank >> >> On Sun, Dec 8, 2019 at 9:29 AM Frank Muller >>> wrote: >>> >>>> Hi all >>>> >>>> >>>> Lets assume a public ETS table of type ?ordered_set?. >>>> >>>> >>> The ETS documentation defines "safe traversal". Traversal of >>> `ordered_set` tables are always safe. That is, they will do the "right >>> thing" if updates are done while traversal happens. 
>>> >>> For tables of the other types, you either do the whole traversal in one >>> call, or you use the safe_fixtable/2 call to fix the table while traversal >>> is happening. >>> >>> The underlying reason is that the table has an order. This means we can >>> always drill down into the table and find the place we "were" so to speak >>> by utilizing this order. In the case of e.g., `set` tables, we don't a >>> priori have an order, so we are encoding something like a hash bucket and >>> the element we reached. But if that element is gone, we don't know where we >>> were, and there is no other structural information to go by. The fixtable >>> calls basically postpones changes to the table in a separate buffer until >>> the traversal is done, then replays the buffer on the table. This ensures >>> the core table is traversal-stable. Requests to the table first checks the >>> buffer before checking the main core table[0] >>> >>> [0] This model is essentially a 2 level log-structured-merge-tree (LSM >>> tree) variant. >>> >>> -- >>> J. >>> >> > > -- > J. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sverker@REDACTED Mon Dec 9 20:15:41 2019 From: sverker@REDACTED (Sverker Eriksson) Date: Mon, 9 Dec 2019 20:15:41 +0100 Subject: Exercise: ETS with secondary indices In-Reply-To: References: Message-ID: <1575918941.3373.2.camel@erlang.org> Here is a variant of your second solution. But instead insert each element-group pair as a key and use ordered_set which has an optimization for traversal with partially bound key. ets:new(elements,[ordered_set,named_table]). ets:new(groups,[ordered_set,named_table]). % Insert element [ets:insert(elements, {{Element,G}}) || G <- Groups], [ets:insert(groups, {{G,Element}}) || G <- Groups], % Lookup groups from element Groups = ets:select(elements, [{{{Element,'$1'}}, [], ['$1']}]), % Lookup elements from group Elements = ets:select(groups, [{{{Group,'$1'}}, [], ['$1']}]), % Delete element Groups = ets:select(elements, [{{{Element,'$1'}}, [], ['$1']}]), ets:match_delete(elements, {{Element, '_'}}), [ets:delete(groups, {G,Element}) || G <- Groups], All select and match traversals use a partially bound key and will only search the part of the ordered_set (tree) that may contain such keys. /Sverker On l?r, 2019-12-07 at 18:54 +0100, Roberto Ostinelli wrote: > All, > I'm doing a conceptual exercise to evaluate the usage of ETS with secondary > indices. > > The exercise is really simple: > There are elements which can belong to groups, in a m-to-n relationship (every > element can belong to one or more groups). > Elements and groups can be created/deleted at will. > We need to find the groups an element belongs to. > We need to find the elements included into a group. > We need to not keep an element if it does not belong to a group, nor a group > if it does not have elements in it (this is to avoid memory leaks since groups > / elements get added / removed). > All of these operations should be optimized as much?as possible. > > > A. One possibility is to have 2 ETS tables of type set, that contain the > following terms: > > 1. table elements with tuple format {Element, Groups} > 2. table groups with tuple format {Group, Elements} > > Groups and Elements would be maps (so that finding an element of the map does > not require traversing an array). > > Adding an Element to a Group would mean: > An insert in the table elements where the Group gets added to the Groups map > (bonus of using a map: no duplicates can be created). 
> An insert in the table groups where the Element gets added to the Elements > map. > Retrieving the groups an element belongs to is as simple as getting the Element > in the table elements; the groups will be the keys of the Groups map. > Retrieving the elements a group contains is as simple as getting the Group in > the table groups; the elements will be the keys of the Elements map. > > Deleting is far from being optimized though. For instance, deleting an Element > means: > Getting the Groups it belongs to with a lookup in the table elements. > For every Group in the Groups map, remove the Element from the Elements map. > It becomes something like this: > > case ets:lookup(elements, Element) of > [{Element, Groups}] -> > ets:delete(elements, Element), > lists:foreach(fun(Group) -> > case ets:lookup(groups, Group) of > [{Group, Elements}] -> > case maps:remove(Element, Elements) of > Elements1 when map_size(Elements1) == 0 -> > ets:delete(groups, Group); > Elements1 -> > ets:insert(groups, {Group, Elements1}) > end; > _ -> > ok > end > end, maps:keys(Groups)); > _ -> > ok > end. > > Ouch. Same goes for groups. And this is to delete one element, imagine if > bigger operations need to be made. > > > B. Another possibility would be to have 2 ETS tables of type bag, that contain > the following terms: > > 1. table elements with tuple format {Element, Group} > 2. table groups with tuple format {Group, Element} > > Adding an Element to a Group means: > An insert in the table elements of the tuple {Element, Group}. > An insert in the table groups of the tuple {Group, Element}. > Retrieving the groups an element belongs to requires an ets:match_object/2 or > similar such as: > ets:match_object(elements, {Element, _ = '_'}). > > Retrieving the elements a group contains requires an ets:match_object/2 or > similar such as: > ets:match_object(groups, {Group, _ = '_'}). > > So retrieving requires the traversing of a table, but it should be relatively > quick, given that the match is done on the index itself. > > Deleting is not particularly optimized though, because it requires the > traversing of a table with elements that are not indexed. For instance, > deleting an Element would look something like: > ets:match_delete(elements, {Element, _ = '_'}). %% fast, indexed > ets:match_delete(groups, {_ = '_', Element}). %% not fast, table is being > traversed. > > > What would you do? Any takers are welcome ^^_ > > Cheers, > r. > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From soverdor@REDACTED Tue Dec 10 02:06:48 2019 From: soverdor@REDACTED (Sam Overdorf) Date: Mon, 9 Dec 2019 17:06:48 -0800 Subject: erlang patch program Message-ID: Is there an erlang version of "otp_patch_apply"? Thanks, Sam Overdorf soverdor@REDACTED From henrik.x.nord@REDACTED Tue Dec 10 10:50:10 2019 From: henrik.x.nord@REDACTED (Henrik Nord X) Date: Tue, 10 Dec 2019 09:50:10 +0000 Subject: Patch Package OTP 22.2 Released Message-ID: <9f51751f6442178f4bd7e0a1a6800f51a7ab1f40.camel@ericsson.com> Highlights erts: * The Kernel application's User's Guide now contains a Logger Cookbook with common usage patterns.
* Numerous improvements in the new socket and net modules Standard libraries: * common_test: ct_property_test logging is improved * ssl: Correct handling of unordered chains so that it works as expected Tools: * The Emacs erlang-mode function that lets the user open the documentation for an Erlang/OTP function in an Emacs buffer has been improved. Users will be asked if they want the man pages downloaded if they are not present in the system. For more details see erlang.org/download/otp_src_22.2.readme Pre built versions for Windows can be fetched here: erlang.org/download/otp_win32_22.2.exe erlang.org/download/otp_win64_22.2.exe Online documentation can be browsed here: erlang.org/doc/search/ The source tarball can be fetched here: erlang.org/download/otp_src_22.2.tar.gz The documentation can be fetched here: erlang.org/download/otp_doc_html_22.2.tar.gz The man pages can be fetched here: http://erlang.org/download/otp_doc_man_22.2.tar.gz The Erlang/OTP source can also be found at GitHub on the official Erlang repository: https://github.com/erlang/otp OTP-22.2 Thank you for all your contributions! -------------- next part -------------- An HTML attachment was scrubbed... URL: From marc@REDACTED Tue Dec 10 12:17:58 2019 From: marc@REDACTED (Marc Worrell) Date: Tue, 10 Dec 2019 12:17:58 +0100 Subject: [ANN] Zotonic 0.53.0 released Message-ID: <866916F5-64DA-468A-9177-76CD3697A21C@worrell.nl> Hi, Zotonic is the Erlang content management system and framework. It enables you to quickly build high performance websites using a flexible data model, a rich module system and a powerful template language. We have released version 0.53.0. This is a maintenance release. The most important changes in this release are: - jQuery updated to 3.4.1 - Fix a problem with handling email errors - Remove a problem with Erlang module name clashes (thanks to @rl-king) - Always repivot subject resource on edge delete/insert - Fix a problem with displaying images with a defined crop-center in the admin - Fix a problem where the authors of resource revisions were visible on the revision list for all users See the full release notes at http://docs.zotonic.com/en/latest/developer-guide/releasenotes/rel_0.53.0.html And download at https://github.com/zotonic/zotonic/releases Work on the big master 1.0 release is progressing. Cheers, The Zotonic maintainers. From ostinelli@REDACTED Tue Dec 10 12:34:14 2019 From: ostinelli@REDACTED (Roberto Ostinelli) Date: Tue, 10 Dec 2019 12:34:14 +0100 Subject: Exercise: ETS with secondary indices In-Reply-To: <1575918941.3373.2.camel@erlang.org> References: <1575918941.3373.2.camel@erlang.org> Message-ID: Sverker, This looks very nice. I was considering using composite keys but I didn't know that partial lookups would work. Seems definitely cleaner, and if I understand correctly the lookup time will be the same as with a single key, with the added benefit of the delete functionality (please correct me if I'm wrong). Thank you so much, r. On Mon, Dec 9, 2019 at 8:15 PM Sverker Eriksson wrote: > Here is a variant of your second solution. But instead insert each > element-group pair as a *key* > and use ordered_set which has an optimization for *traversal with > partially bound key*. > > > ets:new(elements,[ordered_set,named_table]). > ets:new(groups,[ordered_set,named_table]).
> % Insert element
> [ets:insert(elements, {{Element,G}}) || G <- Groups],
> [ets:insert(groups, {{G,Element}}) || G <- Groups],
>
> % Lookup groups from element
> Groups = ets:select(elements, [{{{Element,'$1'}}, [], ['$1']}]),
>
> % Lookup elements from group
> Elements = ets:select(groups, [{{{Group,'$1'}}, [], ['$1']}]),
>
> % Delete element
> Groups = ets:select(elements, [{{{Element,'$1'}}, [], ['$1']}]),
> ets:match_delete(elements, {{Element, '_'}}),
> [ets:delete(groups, {G,Element}) || G <- Groups],
>
> All select and match traversals use a partially bound key and will
> only search the part of the ordered_set (tree) that may contain such keys.
>
> /Sverker
>
> On lör, 2019-12-07 at 18:54 +0100, Roberto Ostinelli wrote:
>
> All,
> I'm doing a conceptual exercise to evaluate the usage of ETS with
> secondary indices.
>
> The exercise is really simple:
>
>    1. There are elements which can belong to groups, in an m-to-n
>    relationship (every element can belong to one or more groups).
>    2. Elements and groups can be created/deleted at will.
>    3. We need to find the groups an element belongs to.
>    4. We need to find the elements included in a group.
>    5. We must not keep an element if it does not belong to a group,
>    nor a group if it does not have elements in it (this is to avoid memory
>    leaks, since groups / elements get added / removed).
>
> All of these operations should be optimized as much as possible.
>
> A. One possibility is to have 2 ETS tables of type set, that contain the
> following terms:
>
>    1. table elements with tuple format {Element, Groups}
>    2. table groups with tuple format {Group, Elements}
>
> Groups and Elements would be maps (so that finding an element of the map
> does not require traversing a list).
>
> Adding an Element to a Group would mean:
>
>    - An insert in the table elements where the Group gets added to the
>    Groups map (bonus of using a map: no duplicates can be created).
>    - An insert in the table groups where the Element gets added to the
>    Elements map.
>
> Retrieving the groups an element belongs to is as simple as getting the
> Element in the table elements; the groups will be the keys of the Groups map.
> Retrieving the elements a group contains is as simple as getting the Group
> in the table groups; the elements will be the keys of the Elements map.
>
> Deleting is far from optimized though. For instance, deleting an
> Element means:
>
>    - Getting the Groups it belongs to with a lookup in the table elements.
>    - For every Group in the Groups map, removing the Element from the
>    Elements map.
>
> It becomes something like this:
>
> case ets:lookup(elements, Element) of
>     [{Element, Groups}] ->
>         ets:delete(elements, Element),
>         lists:foreach(fun(Group) ->
>             case ets:lookup(groups, Group) of
>                 [{Group, Elements}] ->
>                     case maps:remove(Element, Elements) of
>                         Elements1 when map_size(Elements1) == 0 ->
>                             ets:delete(groups, Group);
>                         Elements1 ->
>                             ets:insert(groups, {Group, Elements1})
>                     end;
>                 _ ->
>                     ok
>             end
>         end, maps:keys(Groups));
>     _ ->
>         ok
> end.
>
> Ouch. Same goes for groups. And this is to delete *one* element; imagine
> if bigger operations need to be made.
>
> B. Another possibility would be to have 2 ETS tables of type bag, that
> contain the following terms:
>
>    1. table elements with tuple format {Element, Group}
>    2. table groups with tuple format {Group, Element}
>
> Adding an Element to a Group means:
>
>    - An insert in the table elements of the tuple {Element, Group}.
>    - An insert in the table groups of the tuple {Group, Element}.
>
> Retrieving the groups an element belongs to requires an ets:match_object/2
> or similar, such as:
> ets:match_object(elements, {Element, _ = '_'}).
>
> Retrieving the elements a group contains requires an ets:match_object/2
> or similar, such as:
> ets:match_object(groups, {Group, _ = '_'}).
>
> So retrieving requires traversing a table, but it should be
> relatively quick, given that the match is done on the index itself.
>
> Deleting is not particularly optimized though, because it requires
> traversing a table on elements that are not indexed. For instance,
> deleting an Element would look something like:
> ets:match_delete(elements, {Element, _ = '_'}). %% fast, indexed
> ets:match_delete(groups, {_ = '_', Element}). %% not fast, table is being traversed
>
> What would *you* do? Any takers are welcome ^^_
>
> Cheers,
> r.

-------------- next part --------------
An HTML attachment was scrubbed...

From ostinelli@REDACTED Tue Dec 10 20:26:20 2019
From: ostinelli@REDACTED (Roberto Ostinelli)
Date: Tue, 10 Dec 2019 20:26:20 +0100
Subject: Exercise: ETS with secondary indices
In-Reply-To:
References: <1575918941.3373.2.camel@erlang.org>
Message-ID:

Related, is there any difference in terms of performance in this case
(ordered_set) between:

ets:select(groups, [{
    {{Name, '$2'}, '_', '_', '_'},
    [],
    ['$2']
}])

and

ets:select(groups, [{
    {{'$1', '$2'}, '_', '_', '_'},
    [{'=:=', '$1', Name}],
    ['$2']
}])

On Tue, Dec 10, 2019 at 12:34 PM Roberto Ostinelli wrote:

> Sverker,
> This looks very nice. I was considering using composite keys but I didn't
> know that partial lookups would work.
> [...]

-------------- next part --------------
An HTML attachment was scrubbed...
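To make the composite-key pattern from this thread easy to try out, here is a minimal, self-contained sketch; the module name ets_groups and its API are illustrative only, not from the thread:

    -module(ets_groups).
    -export([init/0, add/2, groups_of/1, elements_of/1, delete_element/1]).

    %% Two ordered_set tables; every element-group pair is stored as a
    %% composite key, once per lookup direction.
    init() ->
        ets:new(elements, [ordered_set, named_table]),
        ets:new(groups, [ordered_set, named_table]),
        ok.

    add(Element, Group) ->
        ets:insert(elements, {{Element, Group}}),
        ets:insert(groups, {{Group, Element}}),
        ok.

    %% Both lookups only traverse the subtree whose keys start with the
    %% bound first element.
    groups_of(Element) ->
        ets:select(elements, [{{{Element, '$1'}}, [], ['$1']}]).

    elements_of(Group) ->
        ets:select(groups, [{{{Group, '$1'}}, [], ['$1']}]).

    delete_element(Element) ->
        Groups = groups_of(Element),
        ets:match_delete(elements, {{Element, '_'}}),
        [ets:delete(groups, {Group, Element}) || Group <- Groups],
        ok.

For example, after ets_groups:init() and ets_groups:add(e1, g1), the call ets_groups:groups_of(e1) returns [g1], and ets_groups:delete_element(e1) removes both directions.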
From rickard@REDACTED Tue Dec 10 23:26:44 2019
From: rickard@REDACTED (Rickard Green)
Date: Tue, 10 Dec 2019 23:26:44 +0100
Subject: erlang patch program
In-Reply-To:
References:
Message-ID:

On Tue, Dec 10, 2019 at 02:06 Sam Overdorf wrote:

> Is there an erlang version of "otp_patch_apply"?
>
> Thanks,
> Sam Overdorf
> soverdor@REDACTED

No, there isn't.

Regards,
Rickard

--
Rickard Green, Erlang/OTP, Ericsson AB
-------------- next part --------------
An HTML attachment was scrubbed...

From henrique4win@REDACTED Wed Dec 11 01:01:19 2019
From: henrique4win@REDACTED (Henrique Marcomini)
Date: Tue, 10 Dec 2019 21:01:19 -0300
Subject: Idea about implementing a SECCOMP alike mechanism in BEAM
Message-ID:

Hi,

I've been working with Erlang/Elixir for the past year and I really miss SECCOMP-like features in the BEAM. So I started an implementation based on https://github.com/erlang/otp and I wanted to know what you people think about it.

The idea is to provide a way to blacklist/whitelist function calls on a process level, so for example if I'm evaluating human input for a calculation, I can guarantee that no functions except the ones that are necessary are called. Going further, in a case where my Erlang cookie is leaked, I know that only a limited set of functions are callable using rpc or node.spawn/2.

The way I envision it (and am implementing it) is adding a byte to the process struct with the following meaning:

     0 1 2 3 4 5 6 7
    +-+-+-+-+-+-+-+-+
    |E|M|S|I|U|U|U|U|
    +-+-+-+-+-+-+-+-+

Where:
E -> Whether the mechanism is active (0: Off / 1: On)
M -> Operation mode (0: Whitelist / 1: Blacklist)
S -> Disable spawn (0: Can spawn new processes / 1: Cannot spawn new processes)
I -> Whether a child process will inherit the mechanism
U -> Unused

There are some implicit rules in this byte:
- M, S, and I are unused when E is set to 0
- I is unused if S is set to 1

I chose to use a byte because bitwise operations are cheap and are the least expensive way I could think of, and bitmasks can be combined in a meaningful way.

The verification of this byte would occur in the apply function, so we can check the byte every time a function is called. To know which functions are whitelisted/blacklisted, I added an Eterm to the process struct. This Eterm is a NIL-terminated list of tuples; each tuple contains two atoms representing the module name and the function name which is whitelisted/blacklisted.

Probably a hashmap, or a binary tree of hashes, would be quicker to search. But I don't know if there is any good low-level way to introduce it without adding a lot of code to the code base.

To implement process inherit capabilities, I added a verification on spawn, but there are some possible bypasses that would need to be handled later on.

For example:

-------------------------------------------------------------------------------

If there is a process running as a dynamic supervisor (P1), some other process (P2) may send a message to spawn some worker (P5), and the parent process would be the supervisor (P1), which may not have the mechanism active.

Diagram below:

    |                            |
    | P1 - Dynamic Supervisor    | P2 (With active mechanism)
    V                            |
    ===============              |
    | P3    | P4                 V
    V       V

When P2 asks P1 to spawn a new worker, the diagram will look like the following:

    |                            |
    | P1 - Dynamic Supervisor    | P2 (With active mechanism)
    V                            |
    ===============              |
    | P3    | P4    | P5         V
    V       V       V

Where P3, P4, and P5 are spawned with P1 as a parent, so P5 will not inherit any rules from P2. At this point P5 can execute any code and send a message to P2, bypassing the mechanism.

-------------------------------------------------------------------------------

In another case, a process (P1) on Node 1 which is under this mechanism may spawn another process (P2) on Node 2, and then P2 spawns another process (P3) back on Node 1. If processes generated by spawns from other nodes are less secure than the process that called the spawn function, it will lead to privilege escalation.

Diagram below:

    +---------------+           +---------------+
    |               |           |               |
    |    Node 1     |           |    Node 2     |
    |               |           |               |
    |  P1           |  spawn/2  |               |
    |  ------------> --------------> --------+  |
    |        calls  |           |            |  |
    |               |           |        P2  |  |
    |               |           |            |  |
    |  P3           |  spawn/2  |    calls   |  |
    |  <----------- <-------------- <--------+  |
    |               |           |               |
    +---------------+           +---------------+

If P3's restrictions are less strict than P1's, then P1 has escalated privilege.

-------------------------------------------------------------------------------

The code that I'm working on is at https://github.com/Supitto/OTP/tree/maint and it is built upon the maint branch. It is still quite immature and has some edges to trim (I'm still fighting with allocations and Eterms). But if this idea is appreciated I will implement everything on the main branch. Also, if you can think of some other scenario where this mechanism is defeated, please inform me :D

Thanks,

Henrique Almeida Marcomini

Telegram -> @supitto
IRC (freenode) -> Supitto

Ps. The code on the repo may not be working (depends on the commit), but the idea is there.

Pps. I made everything in ASCII, so to see it properly use a monospace font.
-------------- next part --------------
An HTML attachment was scrubbed...

From lukas@REDACTED Wed Dec 11 08:38:38 2019
From: lukas@REDACTED (Lukas Larsson)
Date: Wed, 11 Dec 2019 08:38:38 +0100
Subject: Idea about implementing a SECCOMP alike mechanism in BEAM
In-Reply-To:
References:
Message-ID:

Hello,

It has been a while since it was published, but maybe you will find this interesting: http://www.erlang.se/publications/xjobb/0109-naeser.pdf

Lukas

On Wed, Dec 11, 2019 at 6:07 AM Henrique Marcomini wrote:

> Hi,
>
> I've been working with Erlang/Elixir for the past year and I really miss
> SECCOMP-like features in the BEAM. So I started an implementation based on
> https://github.com/erlang/otp and I wanted to know what you people think
> about it.
> [...]

-------------- next part --------------
An HTML attachment was scrubbed...
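As a side note on the "evaluating human input" use case: a userland approximation of call whitelisting is already possible today via erl_eval's non-local function handler. A minimal sketch follows; the module name safe_eval and the whitelist contents are illustrative, and unlike the VM-level proposal this only guards what the evaluator itself calls:

    -module(safe_eval).
    -export([eval/1]).

    %% Illustrative whitelist of {Module, Function, Arity} triples.
    -define(ALLOWED, [{math, sqrt, 1}, {math, pow, 2}]).

    %% Evaluate a string such as "math:sqrt(2)." (note the trailing dot),
    %% refusing any remote call that is not on the whitelist.
    eval(String) ->
        {ok, Tokens, _} = erl_scan:string(String),
        {ok, Exprs} = erl_parse:parse_exprs(Tokens),
        NonLocal = {value,
                    fun({M, F}, Args) ->
                            case lists:member({M, F, length(Args)}, ?ALLOWED) of
                                true  -> apply(M, F, Args);
                                false -> error({forbidden, {M, F, length(Args)}})
                            end
                    end},
        {value, V, _} = erl_eval:exprs(Exprs, erl_eval:new_bindings(),
                                       none, NonLocal),
        V.

With this, safe_eval:eval("math:sqrt(2).") returns 1.4142135623730951, while safe_eval:eval("os:cmd(\"ls\").") raises {forbidden,{os,cmd,1}}.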
From sverker@REDACTED Wed Dec 11 15:06:05 2019
From: sverker@REDACTED (Sverker Eriksson)
Date: Wed, 11 Dec 2019 15:06:05 +0100
Subject: Exercise: ETS with secondary indices
In-Reply-To:
References: <1575918941.3373.2.camel@erlang.org>
Message-ID: <1576073165.5553.18.camel@erlang.org>

The ordered_set optimization *traversal with partially bound key* only
triggers on the MatchHead tuple and not on the MatchConditions list.

So both select calls will return the same result, but the second one
will traverse the entire search tree.

With a partially bound key like {Name, '_'}, the total "lookup time" is
roughly the same as doing one lookup per matching key.

With a partially bound key like {Name, '_', Other}, the total lookup
time may be longer, as it has to visit all keys matching {Name, '_', '_'}
and check if Other also matches.

/Sverker

On tis, 2019-12-10 at 20:26 +0100, Roberto Ostinelli wrote:

> Related, is there any difference in terms of performance in this case
> (ordered_set) between:
>
> ets:select(groups, [{
>     {{Name, '$2'}, '_', '_', '_'},
>     [],
>     ['$2']
> }])
>
> and
>
> ets:select(groups, [{
>     {{'$1', '$2'}, '_', '_', '_'},
>     [{'=:=', '$1', Name}],
>     ['$2']
> }])
> [...]

-------------- next part --------------
An HTML attachment was scrubbed...

From comptekki@REDACTED Thu Dec 12 04:11:26 2019
From: comptekki@REDACTED (Wes James)
Date: Wed, 11 Dec 2019 20:11:26 -0700
Subject: diff between release tar.gz files
Message-ID:

What is the difference between:

OTP-22.2-bundle.tar.gz

and

Source code (tar.gz) (OTP-22.2.tar.gz)

at https://github.com/erlang/otp/releases

Thanks,

-Wes
-------------- next part --------------
An HTML attachment was scrubbed...

From lukas@REDACTED Thu Dec 12 10:48:12 2019
From: lukas@REDACTED (Lukas Larsson)
Date: Thu, 12 Dec 2019 10:48:12 +0100
Subject: erlang:open_port compatibility
Message-ID:

Hello,

While looking into a bug on Windows related to os:cmd/1, I found that the behaviour of erlang:open_port when spawning port programs is somewhat strange when it comes to the handling of stdin. In the current implementation, if you do this on Unix:

> erlang:open_port({spawn,"erl"},[in]).

you will get into a very strange state where the parent erl and the child erl both compete for the input characters typed into the terminal. If you do it on Windows, the child erl will just spin forever.

So, I'm thinking of making it so that when 'out' is not supplied to erlang:open_port, the stdin fd will be closed in the child program, or alternatively point to /dev/null. This mimics the Windows behaviour more closely (though there will be no spinning) and makes more sense in my opinion. However, with the change, if you want to spawn an Erlang child you need to give it the -noinput argument, as otherwise it will exit immediately.

So the question for you all: will this change break any of your code?

Lukas
Erlang/OTP team
-------------- next part --------------
An HTML attachment was scrubbed...

From frank.muller.erl@REDACTED Thu Dec 12 19:43:18 2019
From: frank.muller.erl@REDACTED (Frank Muller)
Date: Thu, 12 Dec 2019 19:43:18 +0100
Subject: inet buffer size for TCP
Message-ID:

Hi all

The inet documentation ( http://erlang.org/doc/man/inet.html ) states:

{buffer, Size}

    The size of the user-level buffer used by the driver. Not to be
    confused with options sndbuf and recbuf, which correspond to the
    kernel socket buffers. For TCP it is recommended to have
    val(buffer) >= val(recbuf) to avoid performance issues because of
    unnecessary copying. For UDP [...]

Question: which is best here?

val(buffer) = val(recbuf)
val(buffer) = val(recbuf) + (1/4 * val(recbuf))
val(buffer) = val(recbuf) + (2/4 * val(recbuf))
val(buffer) = val(recbuf) + (3/4 * val(recbuf))
val(buffer) = 2 * val(recbuf)

Is there any optimal value?

/Frank
-------------- next part --------------
An HTML attachment was scrubbed...
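As a concrete illustration of the two option families being compared here, the kernel buffer can be read back and the driver buffer aligned to it. A small sketch (the port number and sizes are arbitrary):

    %% Listen socket with an explicit kernel receive buffer.
    {ok, LSock} = gen_tcp:listen(0, [binary, {active, false},
                                     {recbuf, 64 * 1024}]),
    %% The OS may round recbuf up or down, so read back the effective value...
    {ok, [{recbuf, RecBuf}]} = inet:getopts(LSock, [recbuf]),
    %% ...and size the user-level driver buffer to match it.
    ok = inet:setopts(LSock, [{buffer, RecBuf}]).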
From lukas@REDACTED Fri Dec 13 09:27:54 2019
From: lukas@REDACTED (Lukas Larsson)
Date: Fri, 13 Dec 2019 09:27:54 +0100
Subject: inet buffer size for TCP
In-Reply-To:
References:
Message-ID:

On Thu, Dec 12, 2019 at 7:43 PM Frank Muller wrote:

> Question: which is best here?
>
> val(buffer) = val(recbuf)
> val(buffer) = val(recbuf) + (1/4 * val(recbuf))
> val(buffer) = val(recbuf) + (2/4 * val(recbuf))
> val(buffer) = val(recbuf) + (3/4 * val(recbuf))
> val(buffer) = 2 * val(recbuf)
>
> Is there any optimal value?

There is no generally optimal value. It will depend on what data you are sending and which packet mode you are using. If not using any packet mode (aka raw), then I would say that "val(buffer) = val(recbuf)" should be the best option.
-------------- next part --------------
An HTML attachment was scrubbed...

From frank.muller.erl@REDACTED Fri Dec 13 09:43:39 2019
From: frank.muller.erl@REDACTED (Frank Muller)
Date: Fri, 13 Dec 2019 09:43:39 +0100
Subject: inet buffer size for TCP
In-Reply-To:
References:
Message-ID:

Ok, thanks.

/Frank

> There is no generally optimal value. It will depend on what data you are
> sending and which packet mode you are using. [...]

-------------- next part --------------
An HTML attachment was scrubbed...

From sperber@REDACTED Fri Dec 13 17:30:54 2019
From: sperber@REDACTED (Michael Sperber)
Date: Fri, 13 Dec 2019 17:30:54 +0100
Subject: Call for Participation: BOB 2020 (February 28, Berlin)
Message-ID:

================================================================================
BOB 2020 Conference
"What happens if we simply use what's best?"
February 28, 2020, Berlin
http://bobkonf.de/2020/
Program: http://bobkonf.de/2020/en/program.html
Registration: http://bobkonf.de/2020/en/registration.html
================================================================================

BOB is the conference for developers, architects and decision-makers to explore technologies beyond the mainstream in software development, and to find the best tools available to software developers today. Our goal is for all participants of BOB to return home with new insights that enable them to improve their own software development experiences.

The program features 14 talks and 8 tutorials on current topics:
http://bobkonf.de/2020/en/program.html

The subject range of the talks includes functional programming, formal methods, architecture documentation, functional-reactive programming, and language design. The tutorials feature introductions to Idris, Haskell, F#, TLA+, ReasonML, and probabilistic programming.

Heather Miller will give the keynote talk.

Registration is open online:
http://bobkonf.de/2020/en/registration.html

NOTE: The early-bird rates expire on February 19, 2020!

BOB cooperates with the :clojureD conference on the day after BOB:
https://clojured.de/

From eric.pailleau@REDACTED Sat Dec 14 13:40:38 2019
From: eric.pailleau@REDACTED (PAILLEAU Eric)
Date: Sat, 14 Dec 2019 13:40:38 +0100
Subject: [ANN] geas 2.5 (Erlang 22.2)
Message-ID: <86bb0bc1-7967-b430-9e01-3047660dd2bb@wanadoo.fr>

Hi,

Geas 2.5 has been released!

Geas is a tool that detects the runnable official Erlang release window for your project. Geas will also tell you:

- what the offending functions are in the beam/source files that reduce the available window
- if some beam files are compiled native
- the installed patches, and recommended patches that should be installed depending on your code

Geas is available as a module, and as erlang.mk and rebar 2/3 plugins.

Changelog:
- Update for OTP 22.2 detection and database
- Code re-manufacturing, speed increase on big projects
- *New features*:
  - versions are now displayed on the right side of the output
  - a semver OTP range can be set with the new variable GEAS_RANGE
  - plugins exit non-zero on error (for better CI integration)

*Potential incompatibility* with scripts using geas, as the plugins now exit non-zero on error.

https://github.com/crownedgrouse/geas
https://github.com/crownedgrouse/geas/releases/tag/2.5
https://github.com/crownedgrouse/geas/wiki/API-changelog#release--25

What changed in OTP 22.2 compared to 22.1:
https://raw.githubusercontent.com/crownedgrouse/geas_devel/master/doc/reldiffs/yaml/22.1%7E22.2

Cheers!
Eric

From frank.muller.erl@REDACTED Sat Dec 14 14:59:54 2019
From: frank.muller.erl@REDACTED (Frank Muller)
Date: Sat, 14 Dec 2019 14:59:54 +0100
Subject: [ANN] Zotonic 0.53.0 released
In-Reply-To: <866916F5-64DA-468A-9177-76CD3697A21C@worrell.nl>
References: <866916F5-64DA-468A-9177-76CD3697A21C@worrell.nl>
Message-ID:

Congrats Marc!!!

> Hi,
>
> Zotonic is the Erlang content management system and framework.
> [...]
-------------- next part --------------
An HTML attachment was scrubbed...

From ostinelli@REDACTED Sun Dec 15 18:52:54 2019
From: ostinelli@REDACTED (Roberto Ostinelli)
Date: Sun, 15 Dec 2019 18:52:54 +0100
Subject: Exercise: ETS with secondary indices
In-Reply-To: <1576073165.5553.18.camel@erlang.org>
References: <1575918941.3373.2.camel@erlang.org> <1576073165.5553.18.camel@erlang.org>
Message-ID:

Thank you Sverker!

On Wed, Dec 11, 2019 at 3:06 PM Sverker Eriksson wrote:

> The ordered_set optimization *traversal with partially bound key* only
> triggers on the MatchHead tuple and not on the MatchConditions list.
>
> So both select calls will return the same result, but the second one
> will traverse the entire search tree.
> [...]

-------------- next part --------------
An HTML attachment was scrubbed...
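To observe the difference Sverker describes, the two match specs can be timed against the same ordered_set. A quick sketch (the table layout is simplified to bare composite keys, and the sizes are arbitrary):

    T = ets:new(t, [ordered_set]),
    [ets:insert(T, {{N rem 1000, N}}) || N <- lists:seq(1, 100000)],

    %% Partially bound key: only the subtree with first element 5 is visited.
    MS1 = [{{{5, '$2'}}, [], ['$2']}],
    %% Key bound in the guard instead: the whole tree is traversed.
    MS2 = [{{{'$1', '$2'}}, [{'=:=', '$1', 5}], ['$2']}],

    {T1, R} = timer:tc(ets, select, [T, MS1]),
    {T2, R} = timer:tc(ets, select, [T, MS2]),  %% same result R, different time
    io:format("partial key: ~p us, guard: ~p us~n", [T1, T2]).

Both calls return the same list (so R matches twice), but the guard-based spec should show a noticeably larger time as the table grows.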
From duncan.attard.01@REDACTED Mon Dec 16 13:02:35 2019
From: duncan.attard.01@REDACTED (Duncan Paul Attard)
Date: Mon, 16 Dec 2019 13:02:35 +0100
Subject: Obtain message content of Erlang messages in dtrace
Message-ID: <498EF4E8-0C4E-4C3C-82C3-F509C6D16715@um.edu.mt>

Hi,

I was experimenting with Erlang and dtrace, and am interested to know whether the message content exchanged between two Erlang processes can be obtained. In particular, I am interested in the `message-send` and `message-receive` probes.

I looked at erlang_dtrace.d (https://github.com/erlang/otp/blob/master/erts/emulator/beam/erlang_dtrace.d) as well as messages.d (https://github.com/erlang/otp/blob/master/lib/runtime_tools/examples/messages.d) to see whether this is possible, but I was not able to make any progress.

Is there a way in which this can be achieved? And if not, are there any alternatives?

Thanks,
Duncan

From duncan.attard.01@REDACTED Mon Dec 16 13:16:15 2019
From: duncan.attard.01@REDACTED (Duncan Paul Attard)
Date: Mon, 16 Dec 2019 13:16:15 +0100
Subject: Obtain message content of Erlang messages in dtrace
In-Reply-To:
References: <498EF4E8-0C4E-4C3C-82C3-F509C6D16715@um.edu.mt>
Message-ID: <5EA32E85-3F60-4B51-BF6F-8C887CDF5EE5@um.edu.mt>

Thanks a lot! Yes, currently I'm using erlang:trace/3, but was curious whether the same thing can be achieved `externally` via dtrace (or even LTTng).

Does "Dtrace is meant more to observe the VM itself, not to trace your application." mean that the message content cannot be retrieved, though?

Duncan

> On 16 Dec 2019, at 13:10, Łukasz Niemier wrote:
>
>> I was experimenting with Erlang and dtrace, and am interested to know whether the message content exchanged between two Erlang processes can be obtained. In particular, I am interested in the `message-send` and `message-receive` probes.
>
> Dtrace is meant more to observe the VM itself, not to trace your application.
>
>> Is there a way in which this can be achieved? And if not, are there any alternatives?
>
> For tracing your own application you have erlang:trace/3, seq_trace, dbg, dyntrace, etc. If you are more interested in tracing and debugging applications in Erlang then you should check https://www.erlang-in-anger.com
>
> --
> Łukasz Niemier
> lukasz@REDACTED

From otp@REDACTED Thu Dec 19 12:05:55 2019
From: otp@REDACTED (Erlang/OTP)
Date: Thu, 19 Dec 2019 12:05:55 +0100 (CET)
Subject: Patch Package OTP 22.2.1 Released
Message-ID: <20191219110555.E24312546A3@hel.cslab.ericsson.net>

Patch Package:     OTP 22.2.1
Git Tag:           OTP-22.2.1
Date:              2019-12-19
Trouble Report Id: OTP-16314, OTP-16349, OTP-16357, OTP-16359,
                   OTP-16360, OTP-16361
Seq num:           ERIERL-444, ERIERL-451, ERL-1098, ERL-1166
System:            OTP
Release:           22
Application:       erts-10.6.1, snmp-5.4.5, ssl-9.5.1
Predecessor:       OTP 22.2

Check out the git tag OTP-22.2.1, and build a full OTP system including documentation. Apply one or more applications from this build as patches to your installation using the 'otp_patch_apply' tool. For information on install requirements, see the descriptions for each application version below.

---------------------------------------------------------------------
--- erts-10.6.1 -----------------------------------------------------
---------------------------------------------------------------------

Note! The erts-10.6.1 application *cannot* be applied independently of other applications on an arbitrary OTP 22 installation.

On a full OTP 22 installation, also the following runtime dependency has to be satisfied:
-- kernel-6.5.1 (first satisfied in OTP 22.2)

--- Fixed Bugs and Malfunctions ---

OTP-16314  Application(s): erts
           Related Id(s): ERL-1098

           Corrected an issue with the new socket API which could
           cause a core dump. A race during socket close could
           cause a core dump (an invalid nif environment free).

OTP-16359  Application(s): erts

           Corrected an issue with the new socket API which could
           cause a core dump. When multiple accept processes were
           waiting for a connection, a connect could cause a core
           dump.

Full runtime dependencies of erts-10.6.1: kernel-6.5.1, sasl-3.3, stdlib-3.5

---------------------------------------------------------------------
--- snmp-5.4.5 ------------------------------------------------------
---------------------------------------------------------------------

The snmp-5.4.5 application can be applied independently of other applications on a full OTP 22 installation.

--- Improvements and New Features ---

OTP-16349  Application(s): snmp
           Related Id(s): ERIERL-444

           It is now possible to remove selected varbinds (from the
           final message) when sending a notification. This is done
           by setting the 'value' (in the varbind(s) of the varbinds
           list) to '?NOTIFICATION_IGNORE_VB_VALUE'.

OTP-16360  Application(s): snmp
           Related Id(s): ERIERL-451

           It is now possible to specify that an oid shall be
           "truncated" (trailing ".0" to be removed) when sending a
           notification.

Full runtime dependencies of snmp-5.4.5: crypto-3.3, erts-6.0, kernel-3.0, mnesia-4.12, runtime_tools-1.8.14, stdlib-2.5

---------------------------------------------------------------------
--- ssl-9.5.1 -------------------------------------------------------
---------------------------------------------------------------------

The ssl-9.5.1 application can be applied independently of other applications on a full OTP 22 installation.

--- Fixed Bugs and Malfunctions ---

OTP-16357  Application(s): ssl
           Related Id(s): ERL-1166

           Add missing alert handling clause for TLS record
           handling. Could sometimes cause confusing error
           behaviours of TLS connections.

OTP-16361  Application(s): ssl

           Fix handling of ssl:recv that happens during a
           renegotiation. Using the passive receive function
           ssl:recv/[2,3] during a renegotiation would fail the
           connection with an unexpected msg.

Full runtime dependencies of ssl-9.5.1: crypto-4.2, erts-10.0, inets-5.10.7, kernel-6.0, public_key-1.5, stdlib-3.5

---------------------------------------------------------------------
---------------------------------------------------------------------
---------------------------------------------------------------------
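After applying such a patch, it can be worth confirming from a running node that the patched application versions are actually the ones loaded. A small sketch (the application list is just an example, and which_applications/0 only reports applications that are running):

    %% Print the loaded version of each patched application plus the
    %% ERTS version of the running emulator.
    [io:format("~p: ~s~n", [App, Vsn])
     || {App, _Desc, Vsn} <- application:which_applications(),
        lists:member(App, [ssl, snmp])],
    io:format("erts: ~s~n", [erlang:system_info(version)]).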
From essen@REDACTED Thu Dec 19 12:14:21 2019
From: essen@REDACTED (=?UTF-8?Q?Lo=c3=afc_Hoguin?=)
Date: Thu, 19 Dec 2019 12:14:21 +0100
Subject: Patch Package OTP 22.2.1 Released
In-Reply-To: <20191219110555.E24312546A3@hel.cslab.ericsson.net>
References: <20191219110555.E24312546A3@hel.cslab.ericsson.net>
Message-ID: <3effe814-232f-2f18-694a-e0f75846e735@ninenines.eu>

Thanks.

By the way, OTP-22.2 is missing from this page:
https://www.erlang.org/downloads

Cheers,

On 19/12/2019 12:05, Erlang/OTP wrote:
> Patch Package:     OTP 22.2.1
> Git Tag:           OTP-22.2.1
> Date:              2019-12-19
> [...]

--
Loïc Hoguin
https://ninenines.eu

From publicityifl@REDACTED Fri Dec 20 14:52:30 2019
From: publicityifl@REDACTED (Jurriaan Hage)
Date: Fri, 20 Dec 2019 05:52:30 -0800
Subject: Third call for draft papers for TFPIE 2020 (Trends in Functional Programming in Education)
Message-ID:

Hello,

Please find below the third call for draft papers for TFPIE 2020. Please forward this to anyone you think may be interested. Apologies for any duplicates you may receive.

best regards,
Jurriaan Hage
Chair of TFPIE 2020

========================================================================
TFPIE 2020 Call for papers
http://www.staff.science.uu.nl/~hage0101/tfpie2020/index.html
February 12th 2020, Krakow, Poland (co-located with TFP 2020 and Lambda Days)
========================================================================

*NEW* Invited Speaker

We are happy to announce the invited speaker for TFPIE 2020, Thorsten Altenkirch, who also speaks at Lambda Days. At TFPIE 2020 he shall be talking about his new book, Conceptual Programming With Python.

*NEW* Registration

This year TFPIE takes place outside of the Lambda Days/TFP organisation, although it takes place near their location. This means you do need to register separately for TFPIE; it also means you can register for TFPIE without registering for TFP/Lambda Days, and vice versa. Registration is mandatory for at least one author of every paper that is presented at the workshop. Only papers that have been presented at TFPIE may be submitted to the post-reviewing process. Registration is 25 euro per person.

TFPIE 2020 welcomes submissions describing techniques used in the classroom, tools used in and/or developed for the classroom, and any creative use of functional programming (FP) to aid education in or outside Computer Science. Topics of interest include, but are not limited to:

FP and beginning CS students
FP and Computational Thinking
FP and Artificial Intelligence
FP in Robotics
FP and Music
Advanced FP for undergraduates
FP in graduate education
Engaging students in research using FP
FP in Programming Languages
FP in the high school curriculum
FP as a stepping stone to other CS topics
FP and Philosophy
The pedagogy of teaching FP
FP and e-learning: MOOCs, automated assessment etc.

Best Lectures - more details below

In addition to papers, we are requesting best lecture presentations. What's your best lecture topic in an FP-related course? Do you have a fun way to present FP concepts to novices, or perhaps an especially interesting presentation of a difficult topic? In either case, please consider sharing it. Best lecture topics will be selected for presentation based on a short abstract describing the lecture and its interest to TFPIE attendees. The length of the presentation should be comparable to that of a paper. On top of the lecture itself, the presentation can also provide commentary on the lecture.

Submissions

Potential presenters are invited to submit an extended abstract (4-6 pages) or a draft paper (up to 20 pages) in EPTCS style.
The authors of accepted presentations will have their preprints and their slides made available on the workshop's website. Papers and abstracts can be submitted via EasyChair at the following link: https://easychair.org/conferences/?conf=tfpie2020

After the workshop, presenters will be invited to submit (a revised version of) their article for review. The PC will select the best articles for publication in the Electronic Proceedings in Theoretical Computer Science (EPTCS). Articles rejected for presentation and extended abstracts will not be formally reviewed by the PC.

Dates
Submission deadline: January 14th 2020, Anywhere on Earth.
Notification: January 17th 2020
TFPIE Registration Deadline: January 20th 2020
Workshop: February 12th 2020
Submission for formal review: April 19th 2020, Anywhere on Earth.
Notification of full article: June 6th 2020
Camera ready: July 1st 2020

Program Committee
Olaf Chitil - University of Kent
Youyou Cong - Tokyo Institute of Technology
Marko van Eekelen - Open University of the Netherlands and Radboud University Nijmegen
Jurriaan Hage (Chair) - Utrecht University
Marco T. Morazan - Seton Hall University, USA
Sharon Tuttle - Humboldt State University, USA
Janis Voigtlaender - University of Duisburg-Essen
Viktoria Zsok - Eotvos Lorand University

Note: information on TFP is available at http://www.cse.chalmers.se/~rjmh/tfp/
-------------- next part --------------
An HTML attachment was scrubbed...

From frank.muller.erl@REDACTED Mon Dec 23 16:51:08 2019
From: frank.muller.erl@REDACTED (Frank Muller)
Date: Mon, 23 Dec 2019 16:51:08 +0100
Subject: New Socket module examples
Message-ID:

Hi guys

I'm playing with the new socket module:
https://erlang.org/doc/man/socket.html

The idea is to switch my gen_tcp-based web server to socket. Currently, I'm using {active, N} and would like to know if anything equivalent is possible with socket. Are there any examples I can look at?

Also, I read somewhere that in R23, gen_tcp will be based on the new socket module. Is this still true?

/Frank
-------------- next part --------------
An HTML attachment was scrubbed...

From chalor99@REDACTED Tue Dec 24 11:04:28 2019
From: chalor99@REDACTED (Rareman S)
Date: Tue, 24 Dec 2019 17:04:28 +0700
Subject: build erlang otp 22.2 but can't use observer
Message-ID:

Hi,

I need to build wx with erlang but don't know how to merge wx and erlang otp together.
-------------- next part --------------
An HTML attachment was scrubbed...

From luke@REDACTED Tue Dec 24 16:39:47 2019
From: luke@REDACTED (Luke Bakken)
Date: Tue, 24 Dec 2019 07:39:47 -0800
Subject: build erlang otp 22.2 but can't use observer
In-Reply-To:
References:
Message-ID:

Hello,

Most operating systems include wx packages. What operating system and version of it are you using?

On Tue, Dec 24, 2019 at 7:32 AM Rareman S wrote:
>
> Hi,
>
> I need to build wx with erlang but don't know how to merge wx and erlang otp together.
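For anyone following this thread: once configure has found wx and the build completes, a quick smoke test from the resulting Erlang shell looks like this (a sketch; the exact return value of wx:new/0 may differ):

    1> wx:new().
    {wx_ref,0,wx,[]}
    2> observer:start().
    ok

If wx support is missing, observer:start() instead fails with an error about the wx application.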
I built Erlang/OTP 22.2 using kerl.

On Tue, Dec 24, 2019 at 22:40 Luke Bakken wrote:

> Hello,
>
> Most operating systems include wx packages. What operating system and
> version of it are you using?
>
> On Tue, Dec 24, 2019 at 7:32 AM Rareman S wrote:
> >
> > Hi,
> >
> > I need to build wx with Erlang but don't know how to merge wx and Erlang/OTP together.
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From zxq9@REDACTED Wed Dec 25 18:57:25 2019
From: zxq9@REDACTED (zxq9)
Date: Thu, 26 Dec 2019 02:57:25 +0900
Subject: build erlang otp 22.2 but can't used observer
In-Reply-To:
References:
Message-ID: <1f7d956d-ecbb-04ea-2286-238c9aaee1fe@zxq9.com>

On 2019/12/24 19:04, Rareman S wrote:
> Hi,
>
> I need to build wx with Erlang but don't know how to merge wx and Erlang/OTP together.

I made notes for myself for building with Kerl on Debian/Ubuntu here: http://zxq9.com/archives/1603

The package names here are not 100% universal, but they are a good guide for searching package names on just about any other Linux-type system. This set of dependencies will enable wx, but not the docs or Java components.

-Craig

From vasdeveloper@REDACTED Wed Dec 25 21:46:00 2019
From: vasdeveloper@REDACTED (Theepan)
Date: Thu, 26 Dec 2019 02:16:00 +0530
Subject: COWBOY Unicode Support
Message-ID:

Hi there,

cowboy_req:reply/4 fails with {reason,badarg} in erlang:iolist_size when Unicode characters are present in the HTML page.

Cowboy version 2.0.

What am I missing?

Thanks,
Theepan
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From vasdeveloper@REDACTED Wed Dec 25 22:23:58 2019
From: vasdeveloper@REDACTED (Theepan)
Date: Thu, 26 Dec 2019 02:53:58 +0530
Subject: COWBOY Unicode Support
In-Reply-To:
References:
Message-ID:

Oops, sorry - the issue is not with Cowboy. Proper iodata() was not passed. Resolved it.

Best,
Theepan

On Thu, Dec 26, 2019 at 2:16 AM Theepan wrote:

> Hi there,
>
> cowboy_req:reply/4 fails with {reason,badarg} in erlang:iolist_size when
> Unicode characters are present in the HTML page.
>
> Cowboy version 2.0.
>
> What am I missing?
>
> Thanks,
> Theepan
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From roger@REDACTED Thu Dec 26 20:41:28 2019
From: roger@REDACTED (Roger Lipscombe)
Date: Thu, 26 Dec 2019 19:41:28 +0000
Subject: build erlang otp 22.2 but can't used observer
In-Reply-To: <1f7d956d-ecbb-04ea-2286-238c9aaee1fe@zxq9.com>
References: <1f7d956d-ecbb-04ea-2286-238c9aaee1fe@zxq9.com>
Message-ID:

On Wed, 25 Dec 2019 at 17:57, zxq9 wrote:

> I made notes for myself for building with Kerl on Debian/Ubuntu here:
> http://zxq9.com/archives/1603

FWIW, my notes are here: http://blog.differentpla.net/blog/2019/01/30/erlang-build-prerequisites/ -- the package list is not quite the same.

From frank.muller.erl@REDACTED Fri Dec 27 01:50:11 2019
From: frank.muller.erl@REDACTED (Frank Muller)
Date: Fri, 27 Dec 2019 01:50:11 +0100
Subject: Port locks with high time under LCNT
Message-ID:

Hi all,

My custom-made web server, which serves only static files (a la Hugo: https://gohugo.io/), is showing this under LCNT: https://gist.github.com/frankmullerl/008174c6594ca27584ac7f4e6724bee5

Can someone explain how to dig further to understand what's going on and why these port locks are taking so much time?

/Frank
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From kjellwinblad@REDACTED Fri Dec 27 08:55:33 2019
From: kjellwinblad@REDACTED (Kjell Winblad)
Date: Fri, 27 Dec 2019 08:55:33 +0100
Subject: Minimalistic Erlang autocompletion package for Emacs
Message-ID:

Hi,

I want to announce a small Emacs package that I have been working on, as I think it might be useful to some of you: https://github.com/kjellwinblad/yaemep

YAEMEP contains Emacs minor modes for completion, auto-updating of the project's TAGS file, and a menu with useful commands. The minimalistic aspect of YAEMEP is that it has no dependencies other than erlang-mode, plus the requirement that the escript program be in the PATH. The project's README file contains more information and a list of projects that can extend Emacs erlang-mode similarly.

Best regards,
Kjell Winblad

From zxq9@REDACTED Fri Dec 27 10:40:26 2019
From: zxq9@REDACTED (Craig Everett)
Date: Fri, 27 Dec 2019 18:40:26 +0900
Subject: build erlang otp 22.2 but can't used observer
In-Reply-To:
References: <1f7d956d-ecbb-04ea-2286-238c9aaee1fe@zxq9.com>
Message-ID:

Roger Lipscombe wrote:

> On Wed, 25 Dec 2019 at 17:57, zxq9 wrote:
>
> > I made notes for myself for building with Kerl on Debian/Ubuntu here:
> > http://zxq9.com/archives/1603
>
> FWIW, my notes are here:
> http://blog.differentpla.net/blog/2019/01/30/erlang-build-prerequisites/
> -- the package list is not quite the same.

I wonder what the best way to aggregate this sort of information in an easy-to-navigate way would be? AFAIK we don't have a wiki space for this, and putting one up for such a limited scope seems overkill.

I wouldn't mind maintaining a page of build-req pages (that is, "do these steps before running kerl or make if you are on system X") for various systems, but I would need people to forward me their notes for different systems and configurations.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From okaprinarjaya@REDACTED Fri Dec 27 10:44:29 2019
From: okaprinarjaya@REDACTED (I Gusti Ngurah Oka Prinarjaya)
Date: Fri, 27 Dec 2019 16:44:29 +0700
Subject: gen_server locked for some time
In-Reply-To:
References:
Message-ID:

Hi,

What is the form of your bulk data? Is it a CSV that contains millions of lines (rows)?

On Wed, 4 Dec 2019 at 02:59 Roberto Ostinelli <ostinelli@REDACTED> wrote:

> Thanks for the tips, Max and Jesper.
> In those solutions though, how do you guarantee the order of the calls? My
> main issue is to avoid the slow process overriding more recent
> but faster data chunks. Do you pile them up in a queue in the received
> order and treat them after that?
>
> On Mon, Dec 2, 2019 at 3:57 PM Jesper Louis Andersen <
> jesper.louis.andersen@REDACTED> wrote:
>
>> Another path is to cooperate the bulk write in the process. Write in
>> small chunks and go back into the gen_server loop in between those chunks
>> being written. You now have progress, but no separate process.
>>
>> Another useful variant is to have two processes, but having the split
>> skewed. You prepare iodata() in the main process, and then send that to the
>> other process as a message. This message will be fairly small since large
>> binaries will be transferred by reference. The queue in the other process
>> acts as a linearizing write buffer so ordering doesn't get messed up. You
>> have now moved the bulk write call out of the main process, so it is free
>> to do other processing in between. You might even want a protocol between
>> the two processes to exert some kind of flow control on the system.
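A minimal sketch of that skewed split (the bulk_writer module, its {write, ...} message, and the written ack are illustrative names, not from the thread; the mailbox is the linearizing buffer, and the ack gives the producer a hook for flow control):

    -module(bulk_writer).
    -export([start_link/1, write/2]).

    %% The orchestrating gen_server prepares iodata() and hands it off
    %% here. The writer's mailbox preserves send order, so chunks are
    %% written strictly in the order they were produced.
    start_link(Fd) ->
        {ok, spawn_link(fun() -> loop(Fd) end)}.

    write(Writer, IoData) ->
        Writer ! {write, self(), IoData},
        ok.

    loop(Fd) ->
        receive
            {write, From, IoData} ->
                ok = file:write(Fd, IoData),
                %% Ack so the producer can throttle itself instead of
                %% letting the writer's queue grow without bound.
                From ! {written, self()},
                loop(Fd)
        end.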
>> However, you don't have an even balance between the processes. One is the
>> intelligent orchestrator. The other is the worker, taking the block on the
>> bulk operation.
>>
>> Another thing is to improve the observability of the system. Start doing
>> measurements on the lag time of the gen_server and plot this in a
>> histogram. Measure the amount of data written in the bulk message. This
>> gives you some real data to work with. The thing is: if you experience
>> blocking in some part of your system, it is likely there is some kind of
>> traffic/request pattern which triggers it. Understand that pattern. It is
>> often covering for some important behavior among users you didn't think
>> about. Anticipation of future uses of the system allows you to be proactive
>> about latency problems.
>>
>> It is sometimes better to gate the problem by limiting what a
>> user/caller/request is allowed to do. As an example, you can reject large
>> requests to the system and demand they happen cooperatively between a
>> client and a server. This slows down the client because they have to wait
>> for a server response until they can issue the next request. If the
>> internet is in between, you just injected an artificial RTT + server
>> processing in between calls, implicitly slowing the client down.
>>
>> On Fri, Nov 29, 2019 at 11:47 PM Roberto Ostinelli
>> wrote:
>>
>>> All,
>>> I have a gen_server that in periodic intervals becomes busy, eventually
>>> over 10 seconds, while writing bulk incoming data. This gen_server also
>>> receives smaller individual data updates.
>>>
>>> I could offload the bulk writing routine to separate processes but the
>>> smaller individual data updates would then be processed before the bulk
>>> processing is over, hence generating an incorrect scenario where smaller
>>> more recent data gets overwritten by the bulk processing.
>>>
>>> I'm trying to see how to solve the fact that all the gen_server calls
>>> during the bulk update would timeout.
>>>
>>> Any ideas of best practices?
>>>
>>> Thank you,
>>> r.
>>>
>>
>> --
>> J.
>>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From roger@REDACTED Fri Dec 27 11:00:00 2019
From: roger@REDACTED (Roger Lipscombe)
Date: Fri, 27 Dec 2019 10:00:00 +0000
Subject: build erlang otp 22.2 but can't used observer
In-Reply-To:
References: <1f7d956d-ecbb-04ea-2286-238c9aaee1fe@zxq9.com>
Message-ID:

On Fri, 27 Dec 2019 at 09:40, Craig Everett wrote:

> I wonder what the best way to aggregate this sort of information in an easy-to-navigate way would be?

Seems like it makes sense for kerl to manage this. For example, ruby-install can identify which platform you're on and will (by default) install the needed dependencies; see https://github.com/postmodern/ruby-install/blob/master/share/ruby-install/ruby/dependencies.txt

From essen@REDACTED Fri Dec 27 11:07:16 2019
From: essen@REDACTED (=?UTF-8?Q?Lo=c3=afc_Hoguin?=)
Date: Fri, 27 Dec 2019 11:07:16 +0100
Subject: Conditional iolist_to_binary?
Message-ID: <1f16422d-04cc-40c4-2522-1d0e751ef62d@ninenines.eu>

Hello,

I seem to be getting a small but noticeable performance improvement when I change the following:

    Value = iolist_to_binary(Value0)

Into the following:

    Value = if
        is_binary(Value0) -> Value0;
        true -> iolist_to_binary(Value0)
    end

At least when the input data is mostly binaries. Anyone else seeing this?
The only difference between is_binary and iolist_to_binary seems to be that is_binary calls is_binary/binary_bitsize on the same if expression, while iolist_to_binary has two if statements.

What gives?

Cheers,

--
Loïc Hoguin
https://ninenines.eu

From chalor99@REDACTED Fri Dec 27 11:39:32 2019
From: chalor99@REDACTED (Rareman S)
Date: Fri, 27 Dec 2019 17:39:32 +0700
Subject: build erlang otp 22.2 but can't used observer
In-Reply-To:
References:
Message-ID:

I build on a Unix system (Solaris) and can't use kerl.

Sent from my iPhone

> On 27 Dec BE 2562, at 17:00, Roger Lipscombe wrote:
>
> On Fri, 27 Dec 2019 at 09:40, Craig Everett wrote:
>
>> I wonder what the best way to aggregate this sort of information in an easy-to-navigate way would be?
>
> Seems like it makes sense for kerl to manage this. For example,
> ruby-install can identify which platform you're on and will (by
> default) install the needed dependencies; see
> https://github.com/postmodern/ruby-install/blob/master/share/ruby-install/ruby/dependencies.txt

From fchschneider@REDACTED Fri Dec 27 12:13:24 2019
From: fchschneider@REDACTED (Schneider)
Date: Fri, 27 Dec 2019 12:13:24 +0100
Subject: build erlang otp 22.2 but can't used observer
In-Reply-To:
References:
Message-ID:

Maybe the authors of https://adoptingerlang.org/docs/development/setup/ would be kind enough to add such details?

> On 27 Dec 2019 at 10:40, Craig Everett wrote:
>
> Roger Lipscombe wrote:
>
> > On Wed, 25 Dec 2019 at 17:57, zxq9 wrote:
> >
> > > I made notes for myself for building with Kerl on Debian/Ubuntu here:
> > > http://zxq9.com/archives/1603
> >
> > FWIW, my notes are here:
> > http://blog.differentpla.net/blog/2019/01/30/erlang-build-prerequisites/
> > -- the package list is not quite the same.
>
> I wonder what the best way to aggregate this sort of information in an easy-to-navigate way would be? AFAIK we don't have a wiki space for this, and putting one up for such a limited scope seems overkill.
>
> I wouldn't mind maintaining a page of build-req pages (that is, "do these steps before running kerl or make if you are on system X") for various systems, but I would need people to forward me their notes for different systems and configurations.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From okaprinarjaya@REDACTED Fri Dec 27 12:47:06 2019
From: okaprinarjaya@REDACTED (I Gusti Ngurah Oka Prinarjaya)
Date: Fri, 27 Dec 2019 18:47:06 +0700
Subject: Nine Nines The Erlanger Playbook
Message-ID:

Hi Loïc Hoguin,

Wow, I am very interested in buying your book. But when will the full version of the book be available?

Thanks
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From dszoboszlay@REDACTED Fri Dec 27 22:54:29 2019
From: dszoboszlay@REDACTED (=?UTF-8?Q?D=C3=A1niel_Szoboszlay?=)
Date: Fri, 27 Dec 2019 22:54:29 +0100
Subject: Conditional iolist_to_binary?
In-Reply-To: <1f16422d-04cc-40c4-2522-1d0e751ef62d@ninenines.eu>
References: <1f16422d-04cc-40c4-2522-1d0e751ef62d@ninenines.eu>
Message-ID:

Hi,

My guess is that the iolist_to_binary(Value0) call compiles into a call_ext instruction, while is_binary(Value0) compiles to a test instruction. This means all live X registers have to be saved to the heap (and restored later) in the first case. Inlining the is_binary check can save you that (and a somewhat expensive external function call) when Value0 is already a binary.
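A minimal module for checking this yourself (the function names are illustrative; compile it with erlc -S, as suggested next, and compare the two functions in the generated .S file):

    -module(iolist_guard).
    -export([to_bin_plain/1, to_bin_guarded/1]).

    %% Compiles to an external call (call_ext) to erlang:iolist_to_binary/1.
    to_bin_plain(Value0) ->
        iolist_to_binary(Value0).

    %% is_binary/1 compiles to an inline test instruction, so the
    %% external call is skipped whenever Value0 is already a binary.
    to_bin_guarded(Value0) ->
        if
            is_binary(Value0) -> Value0;
            true -> iolist_to_binary(Value0)
        end.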
But how much of a difference this is depends on how many live X registers you have at that point. Maybe try compiling the actual module with erlc -S and compare the generated assembly code with and without the if statement?

Cheers,
Daniel

On Fri, 27 Dec 2019 at 11:07, Loïc Hoguin wrote:

> Hello,
>
> I seem to be getting a small but noticeable performance improvement when
> I change the following:
>
> Value = iolist_to_binary(Value0)
>
> Into the following:
>
> Value = if
>     is_binary(Value0) -> Value0;
>     true -> iolist_to_binary(Value0)
> end
>
> At least when the input data is mostly binaries. Anyone else seeing this?
>
> The only difference between is_binary and iolist_to_binary seems to be
> that is_binary calls is_binary/binary_bitsize on the same if expression
> while iolist_to_binary has two if statements.
>
> What gives?
>
> Cheers,
>
> --
> Loïc Hoguin
> https://ninenines.eu
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From by@REDACTED Sat Dec 28 08:25:59 2019
From: by@REDACTED (by)
Date: Sat, 28 Dec 2019 15:25:59 +0800
Subject: Dialyzer does not report errors when module having abstraction violation on opaque type.
Message-ID: <425683FF-06D7-4B2A-A73C-C58E905E0BF9@meetlost.com>

Hi,

I am trying some exercises with opaque types; Dialyzer should report an error on the code below, but it does not.

%% opaque1.erl
-module(opaque1).
-export_type([test_text/0]).
-opaque test_text() :: [{atom(), number()}].
-export([make_text/0]).

-spec make_text() -> test_text().

make_text() ->
    [{a,1}, {c,3}].

%% opaque2.erl
-module(opaque2).
-export([test/0]).

test() ->
    X = opaque1:make_text(),
    [F || {F, _} <- X]. % This violates the abstraction of the opaque type from module opaque1.

Running Dialyzer on these two modules produces:

%%%%
MacBookPro:dialyzer by$ dialyzer opaque1.erl
Checking whether the PLT /Users/by/.dialyzer_plt is up-to-date... yes
Proceeding with analysis...
done in 0m0.12s
done (passed successfully)
MacBookPro:dialyzer by$
%%%%

%%%%
MacBookPro:dialyzer by$ dialyzer opaque2.erl
Checking whether the PLT /Users/by/.dialyzer_plt is up-to-date... yes
Proceeding with analysis...
Unknown functions:
  opaque1:make_text/0
done in 0m0.13s
done (passed successfully)
MacBookPro:dialyzer by$
%%%%

My Erlang/OTP version is: OTP-22.1.4

Am I missing something?

Kind Regards,
Yao
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From zxq9@REDACTED Sat Dec 28 09:43:45 2019
From: zxq9@REDACTED (zxq9)
Date: Sat, 28 Dec 2019 17:43:45 +0900
Subject: Dialyzer does not report errors when module having abstraction violation on opaque type.
In-Reply-To: <425683FF-06D7-4B2A-A73C-C58E905E0BF9@meetlost.com>
References: <425683FF-06D7-4B2A-A73C-C58E905E0BF9@meetlost.com>
Message-ID: <06e44ef2-177d-e1f0-0ccd-be17794daec2@zxq9.com>

On 2019/12/28 16:25, by wrote:
> %% opaque1.erl
> -module(opaque1).
> -export_type([test_text/0]).
> -opaque test_text() :: [{atom(), number()}].
> -export([make_text/0]).
>
> -spec make_text() -> test_text().
>
> make_text() ->
>     [{a,1}, {c,3}].

[snip]

> %% opaque2.erl
> -module(opaque2).
> -export([test/0]).
>
> test() ->
>     X = opaque1:make_text(),
>     [F || {F, _} <- X]. % This violates the abstraction
>
> Running Dialyzer on these two modules produces:
> %%%%
> MacBookPro:dialyzer by$ dialyzer opaque1.erl
> Checking whether the PLT /Users/by/.dialyzer_plt is up-to-date... yes
> Proceeding with analysis... done in 0m0.12s
> done (passed successfully)
> MacBookPro:dialyzer by$
> %%%%
>
> %%%%
> MacBookPro:dialyzer by$ dialyzer opaque2.erl
> Checking whether the PLT /Users/by/.dialyzer_plt is up-to-date... yes
> Proceeding with analysis...
> Unknown functions:
> opaque1:make_text/0
> done in 0m0.13s
> done (passed successfully)
> MacBookPro:dialyzer by$
> %%%%
>
> Am I missing something?

Two things:
1- You need to dialyze BOTH modules in the same run, or else Dialyzer cannot see the typespecs in opaque1.erl while it is evaluating opaque2.erl.
2- You can write code that violates opaque types, though I believe Dialyzer will issue a warning about it if you evaluate both at once.

-Craig

From by@REDACTED Sat Dec 28 10:22:15 2019
From: by@REDACTED (by)
Date: Sat, 28 Dec 2019 17:22:15 +0800
Subject: Dialyzer does not report errors when module having abstraction violation on opaque type.
In-Reply-To: <06e44ef2-177d-e1f0-0ccd-be17794daec2@zxq9.com>
References: <425683FF-06D7-4B2A-A73C-C58E905E0BF9@meetlost.com> <06e44ef2-177d-e1f0-0ccd-be17794daec2@zxq9.com>
Message-ID: <468EB0D2-929D-4E4A-8C99-C6420459831B@meetlost.com>

Yes, you are right. I get the warning after evaluating both modules at the same time. The warning is:

%%%%
MacBookPro:dialyzer by$ dialyzer opaque1.erl opaque2.erl
Checking whether the PLT /Users/by/.dialyzer_plt is up-to-date... yes
Proceeding with analysis...
opaque2.erl:5: Function test/0 has no local return
opaque2.erl:7: The created fun has no local return
opaque2.erl:7: The attempt to match a term of type opaque1:test_text() against the pattern [{F, _} | _] breaks the opacity of the term
opaque2.erl:7: The attempt to match a term of type opaque1:test_text() against the pattern [] breaks the opacity of the term
opaque2.erl:7: Fun application with arguments (X :: opaque1:test_text()) will never return since it differs in the 1st argument from the success typing arguments: ([any()])
done in 0m0.17s
done (warnings were emitted)
MacBookPro:dialyzer by$
%%%%

Thank you!

Kind Regards,
Yao

> On 28 Dec 2019, at 16:43, zxq9 wrote:
>
> On 2019/12/28 16:25, by wrote:
> > %% opaque1.erl
> > -module(opaque1).
> > -export_type([test_text/0]).
> > -opaque test_text() :: [{atom(), number()}].
> > -export([make_text/0]).
> >
> > -spec make_text() -> test_text().
> >
> > make_text() ->
> >     [{a,1}, {c,3}].
>
> [snip]
>
> > %% opaque2.erl
> > -module(opaque2).
> > -export([test/0]).
> >
> > test() ->
> >     X = opaque1:make_text(),
> >     [F || {F, _} <- X]. % This violates the abstraction
> >
> > Am I missing something?
> Two things:
> 1- You need to dialyze BOTH modules in the same run, or else Dialyzer cannot see the typespecs in opaque1.erl while it is evaluating opaque2.erl.
> 2- You can write code that violates opaque types, though I believe Dialyzer will issue a warning about it if you evaluate both at once.
>
> -Craig
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From essen@REDACTED Sat Dec 28 11:16:09 2019
From: essen@REDACTED (=?UTF-8?Q?Lo=c3=afc_Hoguin?=)
Date: Sat, 28 Dec 2019 11:16:09 +0100
Subject: Conditional iolist_to_binary?
In-Reply-To:
References: <1f16422d-04cc-40c4-2522-1d0e751ef62d@ninenines.eu>
Message-ID: <7549621e-26cf-13ce-198f-7e5db0a9bc06@ninenines.eu>

That was it, thanks. And more importantly, thanks for the explanations! Interesting...

On 27/12/2019 22:54, Dániel Szoboszlay wrote:
> Hi,
>
> My guess is that the iolist_to_binary(Value0) call compiles
> into a call_ext instruction, while is_binary(Value0) compiles to a test
> instruction. This means all live X registers have to be saved to the
> heap (and restored later) in the first case. Inlining the is_binary
> check can save you that (and a somewhat expensive external function
> call) when Value0 is already a binary. But how much of a difference this is
> depends on how many live X registers you have at that point. Maybe try
> compiling the actual module with erlc -S and compare the generated
> assembly code with and without the if statement?
>
> Cheers,
> Daniel
>
> On Fri, 27 Dec 2019 at 11:07, Loïc Hoguin wrote:
>
>     Hello,
>
>     I seem to be getting a small but noticeable performance improvement when
>     I change the following:
>
>     Value = iolist_to_binary(Value0)
>
>     Into the following:
>
>     Value = if
>         is_binary(Value0) -> Value0;
>         true -> iolist_to_binary(Value0)
>     end
>
>     At least when the input data is mostly binaries. Anyone else seeing this?
>
>     The only difference between is_binary and iolist_to_binary seems to be
>     that is_binary calls is_binary/binary_bitsize on the same if expression
>     while iolist_to_binary has two if statements.
>
>     What gives?
>
>     Cheers,
>
>     --
>     Loïc Hoguin
>     https://ninenines.eu

--
Loïc Hoguin
https://ninenines.eu

From chalor99@REDACTED Mon Dec 30 10:04:59 2019
From: chalor99@REDACTED (Rareman S)
Date: Mon, 30 Dec 2019 16:04:59 +0700
Subject: build erlang otp 22.2 but can't used observer
In-Reply-To:
References:
Message-ID:

Hi,

Here is my pre-build with Erlang/OTP 22.2:

cd '/export/home/itmxadm/otp-OTP-22.2' ./configure CC=/export/home/itmxadm/STUDIO12/OracleDeveloperStudio12.6-solaris-sparc-bin/developerstudio12.6/bin/cc CXX=/export/home/itmxadm/STUDIO12/OracleDeveloperStudio12.6-solaris-sparc-bin/developerstudio12.6/bin/CC CFLAGS=-O CXXFLAGS=-g === Running configure in /export/home/itmxadm/otp-OTP-22.2/erts === ./configure 'CC=/export/home/itmxadm/STUDIO12/OracleDeveloperStudio12.6-solaris-sparc-bin/developerstudio12.6/bin/cc' 'CXX=/export/home/itmxadm/STUDIO12/OracleDeveloperStudio12.6-solaris-sparc-bin/developerstudio12.6/bin/CC' 'CXXFLAGS=-g' CFLAGS='-O' --disable-option-checking --cache-file=/dev/null --srcdir="/export/home/itmxadm/otp-OTP-22.2/erts" checking build system type... sparc-sun-solaris2.11 checking host system type... sparc-sun-solaris2.11 checking for gcc... /export/home/itmxadm/STUDIO12/OracleDeveloperStudio12.6-solaris-sparc-bin/developerstudio12.6/bin/cc checking whether the C compiler works... yes checking for C compiler default output file name... a.out checking for suffix of executables... checking whether we are cross compiling... no checking for suffix of object files... o checking whether we are using the GNU C compiler... no checking whether /export/home/itmxadm/STUDIO12/OracleDeveloperStudio12.6-solaris-sparc-bin/developerstudio12.6/bin/cc accepts -g...
yes checking for /export/home/itmxadm/STUDIO12/OracleDeveloperStudio12.6-solaris-sparc-bin/developerstudio12.6/bin/cc option to accept ISO C89... none needed checking for library containing strerror... none required checking OTP release... 22 checking OTP version... 22.2 checking for gcc... (cached) /export/home/itmxadm/STUDIO12/OracleDeveloperStudio12.6-solaris-sparc-bin/developerstudio12.6/bin/cc checking whether we are using the GNU C compiler... (cached) no checking whether /export/home/itmxadm/STUDIO12/OracleDeveloperStudio12.6-solaris-sparc-bin/developerstudio12.6/bin/cc accepts -g... (cached) yes checking for /export/home/itmxadm/STUDIO12/OracleDeveloperStudio12.6-solaris-sparc-bin/developerstudio12.6/bin/cc option to accept ISO C89... (cached) none needed checking for mixed cygwin or msys and native VC++ environment... no checking for mixed cygwin and native MinGW environment... no checking if we mix cygwin with any native compiler... no checking if we mix msys with another native compiler... no checking for getconf... getconf checking for large file support CFLAGS... -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 checking for large file support LDFLAGS... none checking for large file support LIBS... none checking CFLAGS for -O switch... yes checking whether /export/home/itmxadm/STUDIO12/OracleDeveloperStudio12.6-solaris-sparc-bin/developerstudio12.6/bin/cc accepts -fprofile-generate -Werror...... no checking whether /export/home/itmxadm/STUDIO12/OracleDeveloperStudio12.6-solaris-sparc-bin/developerstudio12.6/bin/cc accepts -fprofile-use -Werror...... no checking whether /export/home/itmxadm/STUDIO12/OracleDeveloperStudio12.6-solaris-sparc-bin/developerstudio12.6/bin/cc accepts -fprofile-use -fprofile-correction -Werror...... no checking whether /export/home/itmxadm/STUDIO12/OracleDeveloperStudio12.6-solaris-sparc-bin/developerstudio12.6/bin/cc accepts -fprofile-instr-generate -Werror...... no checking whether to do PGO of erts... no checking how to run the C preprocessor... /export/home/itmxadm/STUDIO12/OracleDeveloperStudio12.6-solaris-sparc-bin/developerstudio12.6/bin/cc -E checking for grep that handles long lines and -e... /usr/bin/ggrep checking for egrep... /usr/bin/ggrep -E checking for ANSI C header files... yes checking for sys/types.h... yes checking for sys/stat.h... yes checking for stdlib.h... yes checking for string.h... yes checking for memory.h... yes checking for strings.h... yes checking for inttypes.h... yes checking for stdint.h... yes checking for unistd.h... yes checking size of void *... 4 checking target hardware architecture... ultrasparc checking whether compilation mode forces ARCH adjustment... no: ARCH is ultrasparc checking if VM has to be linked with Carbon framework... no checking for mkdir... /bin/mkdir checking for cp... /bin/cp checking if we are building a sharing-preserving emulator... no checking how to run the C preprocessor... /export/home/itmxadm/STUDIO12/OracleDeveloperStudio12.6-solaris-sparc-bin/developerstudio12.6/bin/cc -E checking for ranlib... ranlib checking for bison... bison -y checking for perl5... no checking for perl... /usr/bin/perl checking whether ln -s works... yes checking for ar... ar checking for xsltproc... xsltproc checking for fop... no configure: WARNING: No 'fop' command found: going to generate placeholder PDF files checking for xmllint... xmllint checking for a BSD-compatible install... /usr/bin/ginstall -c checking how to create a directory including parents... 
/usr/bin/ginstall -c -d checking for extra flags needed to export symbols... none none checking for sin in -lm... yes checking for dlopen in -ldl... yes checking for main in -linet... no checking for openpty in -lutil... no checking for native win32 threads... no checking for pthread_create in -lpthread... yes checking pthread.h usability... yes checking pthread.h presence... yes checking for pthread.h... yes checking pthread/mit/pthread.h usability... no checking pthread/mit/pthread.h presence... no checking for pthread/mit/pthread.h... no checking for kstat_open in -lkstat... yes checking for clock_gettime in -lrt... yes checking for clock_gettime(CLOCK_MONOTONIC_RAW, _)... no checking for clock_gettime() with custom monotonic clock type... CLOCK_HIGHRES checking for clock_getres... yes checking for clock_get_attributes... no checking for gethrtime... yes checking for mach clock_get_time() with monotonic clock type... no checking for pthread.h... (cached) yes checking for pthread/mit/pthread.h... (cached) no checking sched.h usability... yes checking sched.h presence... yes checking for sched.h... yes checking sys/time.h usability... yes checking sys/time.h presence... yes checking for sys/time.h... yes checking for usable PTHREAD_STACK_MIN... no checking for pthread_spin_lock... yes checking for sched_yield... yes checking whether sched_yield() returns an int... yes checking for pthread_yield... no checking for pthread_rwlock_init... yes checking for pthread_rwlockattr_setkind_np... no checking for pthread_attr_setguardsize... yes checking whether pthread_cond_timedwait() can use the monotonic clock CLOCK_HIGHRES for timeout... yes checking for Linux futexes... no checking for pthread_setname_np... linux checking for pthread_getname_np... linux checking size of short... 2 checking size of int... 4 checking size of long... 4 checking size of long long... 8 checking size of __int128_t... 0 checking for a working __sync_synchronize()... no checking for 32-bit __sync_add_and_fetch()... no checking for 64-bit __sync_add_and_fetch()... no checking for 128-bit __sync_add_and_fetch()... no checking for 32-bit __sync_fetch_and_and()... no checking for 64-bit __sync_fetch_and_and()... no checking for 128-bit __sync_fetch_and_and()... no checking for 32-bit __sync_fetch_and_or()... no checking for 64-bit __sync_fetch_and_or()... no checking for 128-bit __sync_fetch_and_or()... no checking for 32-bit __sync_val_compare_and_swap()... no checking for 64-bit __sync_val_compare_and_swap()... no checking for 128-bit __sync_val_compare_and_swap()... no checking for 32-bit __atomic_store_n()... no checking for 64-bit __atomic_store_n()... no checking for 128-bit __atomic_store_n()... no checking for 32-bit __atomic_load_n()... no checking for 64-bit __atomic_load_n()... no checking for 128-bit __atomic_load_n()... no checking for 32-bit __atomic_add_fetch()... no checking for 64-bit __atomic_add_fetch()... no checking for 128-bit __atomic_add_fetch()... no checking for 32-bit __atomic_fetch_and()... no checking for 64-bit __atomic_fetch_and()... no checking for 128-bit __atomic_fetch_and()... no checking for 32-bit __atomic_fetch_or()... no checking for 64-bit __atomic_fetch_or()... no checking for 128-bit __atomic_fetch_or()... no checking for 32-bit __atomic_compare_exchange_n()... no checking for 64-bit __atomic_compare_exchange_n()... no checking for 128-bit __atomic_compare_exchange_n()... no checking for a usable libatomic_ops implementation... 
no checking whether default stack size should be modified... no checking size of void *... (cached) 4 checking size of int... (cached) 4 checking size of long... (cached) 4 checking size of long long... (cached) 8 checking size of __int64... 0 checking size of __int128_t... (cached) 0 checking whether byte ordering is bigendian... yes checking whether double word ordering is middle-endian... no checking for posix_fadvise... yes checking for closefrom... yes checking linux/falloc.h usability... no checking linux/falloc.h presence... no checking for linux/falloc.h... no checking whether fallocate() works... no checking whether posix_fallocate() works... yes checking whether lock checking should be enabled... no checking whether lock counters should be enabled... no checking for kstat_open in -lkstat... (cached) yes checking for tgetent in -ltinfo... no checking for tgetent in -lncurses... yes checking for wcwidth... yes checking for zlib 1.2.5 or higher... yes checking for zlib inflateGetDictionary presence... checking for library containing inflateGetDictionary... none required checking for localtime_r... yes checking for strftime... yes checking for connect... no checking for main in -lsocket... yes checking for gethostbyname... no checking for main in -lnsl... yes checking for gethostbyname_r... no checking for working posix_openpt implementation... yes checking if netdb.h requires netinet/in.h to be previously included... yes checking for socklen_t... yes checking for h_errno declaration in netdb.h... yes checking for dirent.h that defines DIR... yes checking for library containing opendir... none required checking for ANSI C header files... (cached) yes checking for sys/wait.h that is POSIX.1 compatible... yes checking whether time.h and sys/time.h may both be included... yes checking fcntl.h usability... yes checking fcntl.h presence... yes checking for fcntl.h... yes checking limits.h usability... yes checking limits.h presence... yes checking for limits.h... yes checking for unistd.h... (cached) yes checking syslog.h usability... yes checking syslog.h presence... yes checking for syslog.h... yes checking dlfcn.h usability... yes checking dlfcn.h presence... yes checking for dlfcn.h... yes checking ieeefp.h usability... yes checking ieeefp.h presence... yes checking for ieeefp.h... yes checking for sys/types.h... (cached) yes checking sys/stropts.h usability... yes checking sys/stropts.h presence... yes checking for sys/stropts.h... yes checking sys/sysctl.h usability... no checking sys/sysctl.h presence... no checking for sys/sysctl.h... no checking sys/ioctl.h usability... yes checking sys/ioctl.h presence... yes checking for sys/ioctl.h... yes checking for sys/time.h... (cached) yes checking sys/uio.h usability... yes checking sys/uio.h presence... yes checking for sys/uio.h... yes checking sys/mman.h usability... yes checking sys/mman.h presence... yes checking for sys/mman.h... yes checking sys/socket.h usability... yes checking sys/socket.h presence... yes checking for sys/socket.h... yes checking sys/sockio.h usability... yes checking sys/sockio.h presence... yes checking for sys/sockio.h... yes checking sys/socketio.h usability... no checking sys/socketio.h presence... no checking for sys/socketio.h... no checking net/errno.h usability... no checking net/errno.h presence... no checking for net/errno.h... no checking malloc.h usability... yes checking malloc.h presence... yes checking for malloc.h... yes checking arpa/nameser.h usability... 
yes checking arpa/nameser.h presence... yes checking for arpa/nameser.h... yes checking libdlpi.h usability... yes checking libdlpi.h presence... yes checking for libdlpi.h... yes checking pty.h usability... no checking pty.h presence... no checking for pty.h... no checking util.h usability... no checking util.h presence... no checking for util.h... no checking libutil.h usability... no checking libutil.h presence... no checking for libutil.h... no checking utmp.h usability... yes checking utmp.h presence... yes checking for utmp.h... yes checking langinfo.h usability... yes checking langinfo.h presence... yes checking for langinfo.h... yes checking poll.h usability... yes checking poll.h presence... yes checking for poll.h... yes checking sdkddkver.h usability... no checking sdkddkver.h presence... no checking for sdkddkver.h... no checking for struct ifreq.ifr_hwaddr... no checking for struct ifreq.ifr_enaddr... yes checking for dlpi_open in -ldlpi... yes checking sys/resource.h usability... yes checking sys/resource.h presence... yes checking for sys/resource.h... yes checking whether getrlimit is declared... yes checking whether setrlimit is declared... yes checking whether RLIMIT_STACK is declared... yes checking for getrusage... yes checking sys/event.h usability... no checking sys/event.h presence... no checking for sys/event.h... no checking sys/epoll.h usability... no checking sys/epoll.h presence... no checking for sys/epoll.h... no checking sys/devpoll.h usability... yes checking sys/devpoll.h presence... yes checking for sys/devpoll.h... yes checking sys/timerfd.h usability... no checking sys/timerfd.h presence... no checking for sys/timerfd.h... no checking netpacket/packet.h usability... yes checking netpacket/packet.h presence... yes checking for netpacket/packet.h... yes checking for netinet/sctp.h... yes checking for sctp_bindx... no checking for sctp_peeloff... no checking for sctp_getladdrs... no checking for sctp_freeladdrs... no checking for sctp_getpaddrs... no checking for sctp_freepaddrs... no checking whether SCTP_UNORDERED is declared... yes checking whether SCTP_ADDR_OVER is declared... yes checking whether SCTP_ABORT is declared... yes checking whether SCTP_EOF is declared... yes checking whether SCTP_SENDALL is declared... yes checking whether SCTP_ADDR_CONFIRMED is declared... no checking whether SCTP_DELAYED_ACK_TIME is declared... no checking whether SCTP_EMPTY is declared... no checking whether SCTP_UNCONFIRMED is declared... yes checking whether SCTP_CLOSED is declared... yes checking whether SCTPS_IDLE is declared... yes checking whether SCTP_BOUND is declared... yes checking whether SCTPS_BOUND is declared... yes checking whether SCTP_LISTEN is declared... yes checking whether SCTPS_LISTEN is declared... yes checking whether SCTP_COOKIE_WAIT is declared... yes checking whether SCTPS_COOKIE_WAIT is declared... yes checking whether SCTP_COOKIE_ECHOED is declared... yes checking whether SCTPS_COOKIE_ECHOED is declared... yes checking whether SCTP_ESTABLISHED is declared... yes checking whether SCTPS_ESTABLISHED is declared... yes checking whether SCTP_SHUTDOWN_PENDING is declared... yes checking whether SCTPS_SHUTDOWN_PENDING is declared... yes checking whether SCTP_SHUTDOWN_SENT is declared... yes checking whether SCTPS_SHUTDOWN_SENT is declared... yes checking whether SCTP_SHUTDOWN_RECEIVED is declared... yes checking whether SCTPS_SHUTDOWN_RECEIVED is declared... yes checking whether SCTP_SHUTDOWN_ACK_SENT is declared... 
yes checking whether SCTPS_SHUTDOWN_ACK_SENT is declared... yes checking for struct sctp_paddrparams.spp_pathmtu... yes checking for struct sctp_paddrparams.spp_sackdelay... no checking for struct sctp_paddrparams.spp_flags... yes checking for struct sctp_remote_error.sre_data... no checking for struct sctp_send_failed.ssf_data... no checking for struct sctp_event_subscribe.sctp_authentication_event... no checking for struct sctp_event_subscribe.sctp_sender_dry_event... no checking for sched.h... (cached) yes checking setns.h usability... no checking setns.h presence... no checking for setns.h... no checking for setns... no checking linux/errqueue.h usability... no checking linux/errqueue.h presence... no checking for linux/errqueue.h... no checking valgrind/valgrind.h usability... no checking valgrind/valgrind.h presence... no checking for valgrind/valgrind.h... no checking for SO_BSDCOMPAT declaration... no checking for INADDR_LOOPBACK in netinet/in.h... yes checking for sys_errlist declaration in stdio.h or errno.h... no checking if windows.h includes winsock2.h... no checking for an ANSI C-conforming const... yes checking return type of signal handlers... void checking for off_t... yes checking for pid_t... yes checking for size_t... yes checking whether struct tm is in sys/time.h or time.h... time.h checking whether struct sockaddr has sa_len field... no checking for struct exception (and matherr function)... yes checking size of char... 1 checking size of short... (cached) 2 checking size of int... (cached) 4 checking size of long... (cached) 4 checking size of void *... (cached) 4 checking size of long long... (cached) 8 checking size of size_t... 4 checking size of off_t... 8 checking size of time_t... 4 checking for C compiler 'restrict' support... yes checking whether byte ordering is bigendian... (cached) yes checking whether double word ordering is middle-endian... (cached) no checking for fdatasync... yes checking for library containing fdatasync... none required checking for library containing sendfilev... -lsendfile checking windows.h usability... no checking windows.h presence... no checking for windows.h... no checking winsock2.h usability... no checking winsock2.h presence... no checking for winsock2.h... no checking for ws2tcpip.h... no checking for getaddrinfo... yes checking whether getaddrinfo accepts enough flags... yes checking for getnameinfo... yes checking for getipnodebyname... yes checking for getipnodebyaddr... yes checking for gethostbyname2... no checking for ieee_handler... no checking for fpsetmask... yes checking for finite... yes checking for isnan... yes checking for isinf... no checking for res_gethostbyname... no checking for dlopen... yes checking for pread... yes checking for pwrite... yes checking for memmove... yes checking for strerror... yes checking for strerror_r... yes checking for strncasecmp... yes checking for gethrtime... (cached) yes checking for localtime_r... (cached) yes checking for gmtime_r... yes checking for inet_pton... yes checking for mprotect... yes checking for mmap... yes checking for mremap... no checking for memcpy... yes checking for mallopt... no checking for sbrk... yes checking for _sbrk... yes checking for __sbrk... no checking for brk... yes checking for _brk... yes checking for __brk... no checking for flockfile... yes checking for fstat... yes checking for strlcpy... yes checking for strlcat... yes checking for setsid... yes checking for posix2time... no checking for time2posix... no checking for setlocale... 
yes checking for nl_langinfo... yes checking for poll... yes checking for mlockall... yes checking for ppoll... yes checking for isfinite... no checking for posix_memalign... yes checking for writev... yes checking whether posix2time is declared... no checking whether time2posix is declared... no checking for vprintf... yes checking for _doprnt... yes checking for conflicting declaration of fread... no checking for putc_unlocked... yes checking for fwrite_unlocked... no checking for openpty... no checking net/if_dl.h usability... yes checking net/if_dl.h presence... yes checking for net/if_dl.h... yes checking ifaddrs.h usability... yes checking ifaddrs.h presence... yes checking for ifaddrs.h... yes checking for netpacket/packet.h... (cached) yes checking sys/un.h usability... yes checking sys/un.h presence... yes checking for sys/un.h... yes checking for getifaddrs... yes checking whether in6addr_any is declared... yes checking whether in6addr_loopback is declared... yes checking whether IN6ADDR_ANY_INIT is declared... yes checking whether IN6ADDR_LOOPBACK_INIT is declared... yes checking whether IPV6_V6ONLY is declared... yes checking for sched_getaffinity/sched_setaffinity... no checking for pset functionality... yes checking for processor_bind functionality... yes checking for cpuset_getaffinity/cpuset_setaffinity... no checking for 'end' symbol... yes checking for '_end' symbol... yes checking if __after_morecore_hook can track malloc()s core memory use... no checking types of sbrk()s return value and argument... void *,intptr_t checking types of brk()s return value and argument... int,void * checking if sbrk()/brk() wrappers can track malloc()s core memory use... no checking for IP version 6 support... yes checking for multicast support... yes checking for clock_gettime in -lrt... (cached) yes checking for clock_gettime() with wall clock type... CLOCK_REALTIME checking for clock_getres... (cached) yes checking for clock_get_attributes... (cached) no checking for gettimeofday... yes checking for mach clock_get_time() with wall clock type... no checking for clock_gettime in -lrt... (cached) yes checking for clock_gettime(CLOCK_MONOTONIC_RAW, _)... (cached) no checking for clock_gettime() with custom monotonic clock type... CLOCK_HIGHRES checking for clock_getres... (cached) yes checking for clock_get_attributes... (cached) no checking for gethrtime... (cached) yes checking for mach clock_get_time() with monotonic clock type... (cached) no checking for clock_gettime in -lrt... (cached) yes checking for clock_gettime(CLOCK_MONOTONIC_RAW, _)... (cached) no checking for clock_gettime() with high resolution monotonic clock type... CLOCK_HIGHRES checking for clock_getres... (cached) yes checking for clock_get_attributes... (cached) no checking for gethrtime... (cached) yes checking for mach clock_get_time() with monotonic clock type... (cached) no checking if gethrvtime works and how to use it... uses ioctl to procfs checking for m4... m4 checking for safe signal delivery... yes checking for unreliable floating point exceptions... unreliable checking whether to redefine FD_SETSIZE... no checking for working poll()... yes checking whether kernel poll support should be enabled... yes; /dev/poll checking whether putenv() stores a copy of the key-value pair... no checking for a compiler that handles jumptables... /export/home/itmxadm/STUDIO12/OracleDeveloperStudio12.6-solaris-sparc-bin/developerstudio12.6/bin/cc checking whether the code model is small... 
yes checking for kstat_open in -lkstat... (cached) yes checking for kvm_open in -lkvm... yes checking for javac.sh... no checking for javac... no checking for guavac... no checking for gcj... no checking for jikes... no checking for bock... no configure: WARNING: Could not find any usable java compiler, will skip: jinterface checking for log2... yes configure: creating ./config.status config.status: creating emulator/sparc-sun-solaris2.11/Makefile config.status: creating epmd/src/sparc-sun-solaris2.11/Makefile config.status: creating etc/common/sparc-sun-solaris2.11/Makefile config.status: creating include/internal/sparc-sun-solaris2.11/ethread.mk config.status: creating include/internal/sparc-sun-solaris2.11/ erts_internal.mk config.status: creating lib_src/sparc-sun-solaris2.11/Makefile config.status: creating ../make/sparc-sun-solaris2.11/otp.mk config.status: creating ../make/make_emakefile config.status: creating ../lib/os_mon/c_src/sparc-sun-solaris2.11/Makefile config.status: creating ../lib/runtime_tools/c_src/sparc-sun-solaris2.11/Makefile config.status: creating ../lib/tools/c_src/sparc-sun-solaris2.11/Makefile config.status: creating ../make/install_dir_data.sh config.status: creating sparc-sun-solaris2.11/config.h config.status: creating include/internal/sparc-sun-solaris2.11/ethread_header_config.h config.status: creating include/sparc-sun-solaris2.11/erl_int_sizes_config.h config.status: include/sparc-sun-solaris2.11/erl_int_sizes_config.h is unchanged === Running configure in /export/home/itmxadm/otp-OTP-22.2/make === ./configure 'CC=/export/home/itmxadm/STUDIO12/OracleDeveloperStudio12.6-solaris-sparc-bin/developerstudio12.6/bin/cc' 'CXX=/export/home/itmxadm/STUDIO12/OracleDeveloperStudio12.6-solaris-sparc-bin/developerstudio12.6/bin/CC' 'CXXFLAGS=-g' CFLAGS='-O' --disable-option-checking --cache-file=/dev/null --srcdir="/export/home/itmxadm/otp-OTP-22.2/make" Ignoring the --cache-file argument since it can cause the system to be erroneously configured Disabling caching checking build system type... sparc-sun-solaris2.11 checking host system type... sparc-sun-solaris2.11 checking for gcc... /export/home/itmxadm/STUDIO12/OracleDeveloperStudio12.6-solaris-sparc-bin/developerstudio12.6/bin/cc checking whether the C compiler works... yes checking for C compiler default output file name... a.out checking for suffix of executables... checking whether we are cross compiling... no checking for suffix of object files... o checking whether we are using the GNU C compiler... no checking whether /export/home/itmxadm/STUDIO12/OracleDeveloperStudio12.6-solaris-sparc-bin/developerstudio12.6/bin/cc accepts -g... yes checking for /export/home/itmxadm/STUDIO12/OracleDeveloperStudio12.6-solaris-sparc-bin/developerstudio12.6/bin/cc option to accept ISO C89... none needed checking whether we are using the GNU C++ compiler... no checking whether /export/home/itmxadm/STUDIO12/OracleDeveloperStudio12.6-solaris-sparc-bin/developerstudio12.6/bin/CC accepts -g... yes checking for ld... ld checking for mixed cygwin or msys and native VC++ environment... no checking for mixed cygwin and native MinGW environment... no checking if we mix cygwin with any native compiler... no checking if we mix msys with another native compiler... no checking for env... /bin/env checking for GNU make... yes (gmake) checking for a BSD-compatible install... /usr/bin/ginstall -c checking whether ln -s works... yes checking for ranlib... ranlib checking for perl5... no checking for perl... 
/usr/bin/perl checking ERTS version... 10.6 checking OTP release... 22 checking OTP version... 22.2 checking how to run the C preprocessor... /export/home/itmxadm/STUDIO12/OracleDeveloperStudio12.6-solaris-sparc-bin/developerstudio12.6/bin/cc -E checking for grep that handles long lines and -e... /usr/bin/ggrep checking for egrep... /usr/bin/ggrep -E checking for ANSI C header files... yes checking for sys/types.h... yes checking for sys/stat.h... yes checking for stdlib.h... yes checking for string.h... yes checking for memory.h... yes checking for strings.h... yes checking for inttypes.h... yes checking for stdint.h... yes checking for unistd.h... yes checking for native win32 threads... no checking for pthread_create in -lpthread... yes checking pthread.h usability... yes checking pthread.h presence... yes checking for pthread.h... yes checking pthread/mit/pthread.h usability... no checking pthread/mit/pthread.h presence... no checking for pthread/mit/pthread.h... no checking if we can add -Wdeclaration-after-statement to DED_WARN_FLAGS (via CFLAGS)... yes checking if we can add -Werror=return-type to DED_WERRORFLAGS (via CFLAGS)... yes checking if we can add -Werror=implicit to DED_WERRORFLAGS (via CFLAGS)... yes checking if we can add -Werror=undef to DED_WERRORFLAGS (via CFLAGS)... yes checking for ld... ld checking for static compiler flags... -Werror=undef -Werror=implicit -Werror=return-type -D_THREAD_SAFE -D_REENTRANT -DPOSIX_THREADS -D_POSIX_PTHREAD_SEMANTICS -DSTATIC_ERLANG_NIF -DSTATIC_ERLANG_DRIVER checking for basic compiler flags for loadable drivers... -O checking for compiler flags for loadable drivers... -Werror=undef -Werror=implicit -Werror=return-type -Wdeclaration-after-statement -Wall -Wstrict-prototypes -Wmissing-prototypes -D_THREAD_SAFE -D_REENTRANT -DPOSIX_THREADS -D_POSIX_PTHREAD_SEMANTICS -O checking for linker for loadable drivers... ld checking for linker flags for loadable drivers... -G checking for 'runtime library path' linker flag... -R configure: creating ./config.status config.status: creating ../Makefile config.status: creating output.mk config.status: creating ../make/sparc-sun-solaris2.11/otp_ded.mk config.status: creating emd2exml config.status: WARNING: '../Makefile.in' seems to ignore the --datarootdir setting === Running configure in /export/home/itmxadm/otp-OTP-22.2/lib/common_test === ./configure 'CC=/export/home/itmxadm/STUDIO12/OracleDeveloperStudio12.6-solaris-sparc-bin/developerstudio12.6/bin/cc' 'CXX=/export/home/itmxadm/STUDIO12/OracleDeveloperStudio12.6-solaris-sparc-bin/developerstudio12.6/bin/CC' 'CXXFLAGS=-g' CFLAGS='-O' --disable-option-checking --cache-file=/dev/null --srcdir="/export/home/itmxadm/otp-OTP-22.2/lib/common_test" checking build system type... sparc-sun-solaris2.11 checking host system type... sparc-sun-solaris2.11 configure: creating ./config.status config.status: creating priv/sparc-sun-solaris2.11/Makefile === Running configure in /export/home/itmxadm/otp-OTP-22.2/lib/crypto === ./configure 'CC=/export/home/itmxadm/STUDIO12/OracleDeveloperStudio12.6-solaris-sparc-bin/developerstudio12.6/bin/cc' 'CXX=/export/home/itmxadm/STUDIO12/OracleDeveloperStudio12.6-solaris-sparc-bin/developerstudio12.6/bin/CC' 'CXXFLAGS=-g' CFLAGS='-O' --disable-option-checking --cache-file=/dev/null --srcdir="/export/home/itmxadm/otp-OTP-22.2/lib/crypto" checking build system type... sparc-sun-solaris2.11 checking host system type... sparc-sun-solaris2.11 checking for gcc... 
/export/home/itmxadm/STUDIO12/OracleDeveloperStudio12.6-solaris-sparc-bin/developerstudio12.6/bin/cc
checking whether the C compiler works... yes checking for C compiler default output file name... a.out checking for suffix of executables... checking whether we are cross compiling... no checking for suffix of object files... o checking whether we are using the GNU C compiler... no checking whether /export/home/itmxadm/STUDIO12/OracleDeveloperStudio12.6-solaris-sparc-bin/developerstudio12.6/bin/cc accepts -g... yes checking for /export/home/itmxadm/STUDIO12/OracleDeveloperStudio12.6-solaris-sparc-bin/developerstudio12.6/bin/cc option to accept ISO C89... none needed checking for mixed cygwin or msys and native VC++ environment... no checking for mixed cygwin and native MinGW environment... no checking if we mix cygwin with any native compiler... no checking if we mix msys with another native compiler... no checking how to run the C preprocessor... /export/home/itmxadm/STUDIO12/OracleDeveloperStudio12.6-solaris-sparc-bin/developerstudio12.6/bin/cc -E checking for grep that handles long lines and -e... /usr/bin/ggrep checking for egrep... /usr/bin/ggrep -E checking for ANSI C header files... yes checking for sys/types.h... yes checking for sys/stat.h... yes checking for stdlib.h... yes checking for string.h... yes checking for memory.h... yes checking for strings.h... yes checking for inttypes.h... yes checking for stdint.h... yes checking for unistd.h... yes checking for native win32 threads... no checking for pthread_create in -lpthread... yes checking pthread.h usability... yes checking pthread.h presence... yes checking for pthread.h... yes checking pthread/mit/pthread.h usability... no checking pthread/mit/pthread.h presence... no checking for pthread/mit/pthread.h... no checking if we can add -Wdeclaration-after-statement to DED_WARN_FLAGS (via CFLAGS)... yes checking if we can add -Werror=return-type to DED_WERRORFLAGS (via CFLAGS)... yes checking if we can add -Werror=implicit to DED_WERRORFLAGS (via CFLAGS)... yes checking if we can add -Werror=undef to DED_WERRORFLAGS (via CFLAGS)... yes checking for ld... ld checking for static compiler flags... -Werror=undef -Werror=implicit -Werror=return-type -D_THREAD_SAFE -D_REENTRANT -DPOSIX_THREADS -D_POSIX_PTHREAD_SEMANTICS -DSTATIC_ERLANG_NIF -DSTATIC_ERLANG_DRIVER checking for basic compiler flags for loadable drivers... -O checking for compiler flags for loadable drivers... -Werror=undef -Werror=implicit -Werror=return-type -Wdeclaration-after-statement -Wall -Wstrict-prototypes -Wmissing-prototypes -D_THREAD_SAFE -D_REENTRANT -DPOSIX_THREADS -D_POSIX_PTHREAD_SEMANTICS -O checking for linker for loadable drivers... ld checking for linker flags for loadable drivers... -G checking for 'runtime library path' linker flag... -R checking size of void *... 4 checking for static ZLib to be used by SSL in standard locations... no checking for OpenSSL >= 0.9.8c in standard locations... /usr checking for OpenSSL kerberos 5 support... no checking for ssl runtime library path to use... /usr/lib:/usr/local/lib:/usr/sfw/lib:/opt/local/lib:/usr/pkg/lib:/usr/local/openssl/lib:/usr/lib/openssl/lib:/usr/openssl/lib:/usr/local/ssl/lib:/usr/lib/ssl/lib:/usr/ssl/lib:/lib
configure: creating ./config.status
config.status: creating c_src/sparc-sun-solaris2.11/Makefile

=== Running configure in /export/home/itmxadm/otp-OTP-22.2/lib/erl_interface ===
./configure 'CC=/export/home/itmxadm/STUDIO12/OracleDeveloperStudio12.6-solaris-sparc-bin/developerstudio12.6/bin/cc' 'CXX=/export/home/itmxadm/STUDIO12/OracleDeveloperStudio12.6-solaris-sparc-bin/developerstudio12.6/bin/CC' 'CXXFLAGS=-g' CFLAGS='-O' --disable-option-checking --cache-file=/dev/null --srcdir="/export/home/itmxadm/otp-OTP-22.2/lib/erl_interface"
checking build system type... sparc-sun-solaris2.11 checking host system type... sparc-sun-solaris2.11 checking for gcc... /export/home/itmxadm/STUDIO12/OracleDeveloperStudio12.6-solaris-sparc-bin/developerstudio12.6/bin/cc checking whether the C compiler works... yes checking for C compiler default output file name... a.out checking for suffix of executables... checking whether we are cross compiling... no checking for suffix of object files... o checking whether we are using the GNU C compiler... no checking whether /export/home/itmxadm/STUDIO12/OracleDeveloperStudio12.6-solaris-sparc-bin/developerstudio12.6/bin/cc accepts -g... yes checking for /export/home/itmxadm/STUDIO12/OracleDeveloperStudio12.6-solaris-sparc-bin/developerstudio12.6/bin/cc option to accept ISO C89... none needed checking how to run the C preprocessor... /export/home/itmxadm/STUDIO12/OracleDeveloperStudio12.6-solaris-sparc-bin/developerstudio12.6/bin/cc -E checking for ranlib... ranlib checking for ld.sh... no checking for ld... ld checking for grep that handles long lines and -e... /usr/bin/ggrep checking for egrep... /usr/bin/ggrep -E checking for ANSI C header files... yes checking for sys/types.h... yes checking for sys/stat.h... yes checking for stdlib.h... yes checking for string.h... yes checking for memory.h... yes checking for strings.h... yes checking for inttypes.h... yes checking for stdint.h... yes checking for unistd.h... yes checking size of short... 2 checking size of int... 4 checking size of long... 4 checking size of void *... 4 checking size of long long... 8 checking target hardware architecture... ultrasparc checking whether compilation mode forces ARCH adjustment... no: ARCH is ultrasparc checking for unaligned word access... no checking for ar... ar checking for a BSD-compatible install... /usr/bin/ginstall -c checking how to create a directory including parents... /usr/bin/ginstall -c -d checking for gethostbyname in -lnsl... yes checking for getpeername in -lsocket... yes checking for ANSI C header files... (cached) yes checking for sys/wait.h that is POSIX.1 compatible... yes checking arpa/inet.h usability... yes checking arpa/inet.h presence... yes checking for arpa/inet.h... yes checking fcntl.h usability... yes checking fcntl.h presence... yes checking for fcntl.h... yes checking limits.h usability... yes checking limits.h presence... yes checking for limits.h... yes checking malloc.h usability... yes checking malloc.h presence... yes checking for malloc.h... yes checking netdb.h usability... yes checking netdb.h presence... yes checking for netdb.h... yes checking netinet/in.h usability... yes checking netinet/in.h presence... yes checking for netinet/in.h... yes checking stddef.h usability... yes checking stddef.h presence... yes checking for stddef.h... yes checking for stdlib.h... (cached) yes checking for string.h... (cached) yes checking sys/param.h usability... yes checking sys/param.h presence... yes checking for sys/param.h... yes checking sys/socket.h usability... yes checking sys/socket.h presence... yes checking for sys/socket.h... yes checking sys/select.h usability... yes checking sys/select.h presence... yes checking for sys/select.h... yes checking sys/time.h usability... yes checking sys/time.h presence... yes checking for sys/time.h... yes checking for unistd.h... (cached) yes checking for sys/types.h... (cached) yes checking sys/uio.h usability... yes checking sys/uio.h presence... yes checking for sys/uio.h... yes checking for uid_t in sys/types.h... yes checking for pid_t... yes checking for size_t... yes checking whether time.h and sys/time.h may both be included... yes checking for socklen_t usability... yes checking for working alloca.h... yes checking for alloca... yes checking for working memcmp... yes checking for dup2... yes checking for gethostbyaddr... yes checking for gethostbyname... yes checking for gethostbyaddr_r... yes checking for gethostbyname_r... yes checking for gethostname... yes checking for writev... yes checking for gethrtime... yes checking for gettimeofday... yes checking for inet_ntoa... yes checking for memchr... yes checking for memmove... yes checking for memset... yes checking for select... yes checking for socket... yes checking for strchr... yes checking for strerror... yes checking for strrchr... yes checking for strstr... yes checking for uname... yes checking for sysconf... yes checking for res_gethostbyname... no checking for res_gethostbyname in -lresolv... yes checking for clock_gettime... yes checking for mixed cygwin or msys and native VC++ environment... no checking for mixed cygwin and native MinGW environment... no checking if we mix cygwin with any native compiler... no checking if we mix msys with another native compiler... no checking for native win32 threads... no checking for pthread_create in -lpthread... yes checking pthread.h usability... yes checking pthread.h presence... yes checking for pthread.h... yes checking pthread/mit/pthread.h usability... no checking pthread/mit/pthread.h presence... no checking for pthread/mit/pthread.h... no checking size of short... (cached) 2 checking size of int... (cached) 4 checking size of long... (cached) 4 checking size of long long... (cached) 8 checking size of __int128_t... 0 checking for a working __sync_synchronize()... no checking for 32-bit __sync_add_and_fetch()... no checking for 64-bit __sync_add_and_fetch()... no checking for 128-bit __sync_add_and_fetch()... no checking for 32-bit __sync_fetch_and_and()... no checking for 64-bit __sync_fetch_and_and()... no checking for 128-bit __sync_fetch_and_and()... no checking for 32-bit __sync_fetch_and_or()... no checking for 64-bit __sync_fetch_and_or()... no checking for 128-bit __sync_fetch_and_or()... no checking for 32-bit __sync_val_compare_and_swap()... no checking for 64-bit __sync_val_compare_and_swap()... no checking for 128-bit __sync_val_compare_and_swap()... no checking for 32-bit __atomic_store_n()... no checking for 64-bit __atomic_store_n()... no checking for 128-bit __atomic_store_n()... no checking for 32-bit __atomic_load_n()... no checking for 64-bit __atomic_load_n()... no checking for 128-bit __atomic_load_n()... no checking for 32-bit __atomic_add_fetch()... no checking for 64-bit __atomic_add_fetch()... no checking for 128-bit __atomic_add_fetch()... no checking for 32-bit __atomic_fetch_and()... no checking for 64-bit __atomic_fetch_and()... no checking for 128-bit __atomic_fetch_and()... no checking for 32-bit __atomic_fetch_or()... no checking for 64-bit __atomic_fetch_or()... no checking for 128-bit __atomic_fetch_or()... no checking for 32-bit __atomic_compare_exchange_n()... no checking for 64-bit __atomic_compare_exchange_n()... no checking for 128-bit __atomic_compare_exchange_n()... no
configure: creating ./config.status
config.status: creating src/sparc-sun-solaris2.11/Makefile
config.status: creating src/sparc-sun-solaris2.11/eidefs.mk
config.status: creating src/sparc-sun-solaris2.11/config.h

=== Running configure in /export/home/itmxadm/otp-OTP-22.2/lib/megaco ===
./configure 'CC=/export/home/itmxadm/STUDIO12/OracleDeveloperStudio12.6-solaris-sparc-bin/developerstudio12.6/bin/cc' 'CXX=/export/home/itmxadm/STUDIO12/OracleDeveloperStudio12.6-solaris-sparc-bin/developerstudio12.6/bin/CC' 'CXXFLAGS=-g' CFLAGS='-O' --disable-option-checking --cache-file=/dev/null --srcdir="/export/home/itmxadm/otp-OTP-22.2/lib/megaco"
checking build system type... sparc-sun-solaris2.11 checking host system type... sparc-sun-solaris2.11 checking for gcc... /export/home/itmxadm/STUDIO12/OracleDeveloperStudio12.6-solaris-sparc-bin/developerstudio12.6/bin/cc checking whether the C compiler works... yes checking for C compiler default output file name... a.out checking for suffix of executables... checking whether we are cross compiling... no checking for suffix of object files... o checking whether we are using the GNU C compiler... no checking whether /export/home/itmxadm/STUDIO12/OracleDeveloperStudio12.6-solaris-sparc-bin/developerstudio12.6/bin/cc accepts -g... yes checking for /export/home/itmxadm/STUDIO12/OracleDeveloperStudio12.6-solaris-sparc-bin/developerstudio12.6/bin/cc option to accept ISO C89... none needed checking for mixed cygwin or msys and native VC++ environment... no checking for mixed cygwin and native MinGW environment... no checking if we mix cygwin with any native compiler... no checking if we mix msys with another native compiler... no checking for flex... flex checking lex output file root... lex.yy checking lex library... -ll checking whether yytext is a pointer... yes checking for reentrant capable flex... yes checking how to run the C preprocessor... /export/home/itmxadm/STUDIO12/OracleDeveloperStudio12.6-solaris-sparc-bin/developerstudio12.6/bin/cc -E checking for grep that handles long lines and -e... /usr/bin/ggrep checking for egrep... /usr/bin/ggrep -E checking for ANSI C header files... yes checking for sys/types.h... yes checking for sys/stat.h... yes checking for stdlib.h... yes checking for string.h... yes checking for memory.h... yes checking for strings.h... yes checking for inttypes.h... yes checking for stdint.h... yes checking for unistd.h... yes checking for native win32 threads... no checking for pthread_create in -lpthread... yes checking pthread.h usability... yes checking pthread.h presence... yes checking for pthread.h... yes checking pthread/mit/pthread.h usability... no checking pthread/mit/pthread.h presence... no checking for pthread/mit/pthread.h... no checking if we can add -Wdeclaration-after-statement to DED_WARN_FLAGS (via CFLAGS)... yes checking if we can add -Werror=return-type to DED_WERRORFLAGS (via CFLAGS)... yes checking if we can add -Werror=implicit to DED_WERRORFLAGS (via CFLAGS)... yes checking if we can add -Werror=undef to DED_WERRORFLAGS (via CFLAGS)... yes checking for ld... ld checking for static compiler flags... -Werror=undef -Werror=implicit -Werror=return-type -D_THREAD_SAFE -D_REENTRANT -DPOSIX_THREADS -D_POSIX_PTHREAD_SEMANTICS -DSTATIC_ERLANG_NIF -DSTATIC_ERLANG_DRIVER checking for basic compiler flags for loadable drivers... -O checking for compiler flags for loadable drivers... -Werror=undef -Werror=implicit -Werror=return-type -Wdeclaration-after-statement -Wall -Wstrict-prototypes -Wmissing-prototypes -D_THREAD_SAFE -D_REENTRANT -DPOSIX_THREADS -D_POSIX_PTHREAD_SEMANTICS -O checking for linker for loadable drivers... ld checking for linker flags for loadable drivers... -G checking for 'runtime library path' linker flag... -R checking for perl... perl
configure: creating ./config.status
config.status: creating examples/meas/Makefile
configure: creating ./config.status
config.status: creating examples/meas/Makefile
config.status: creating src/flex/sparc-sun-solaris2.11/Makefile

=== Running configure in /export/home/itmxadm/otp-OTP-22.2/lib/odbc ===
./configure 'CC=/export/home/itmxadm/STUDIO12/OracleDeveloperStudio12.6-solaris-sparc-bin/developerstudio12.6/bin/cc' 'CXX=/export/home/itmxadm/STUDIO12/OracleDeveloperStudio12.6-solaris-sparc-bin/developerstudio12.6/bin/CC' 'CXXFLAGS=-g' CFLAGS='-O' --disable-option-checking --cache-file=/dev/null --srcdir="/export/home/itmxadm/otp-OTP-22.2/lib/odbc"
checking build system type... sparc-sun-solaris2.11 checking host system type... sparc-sun-solaris2.11 checking for gcc... /export/home/itmxadm/STUDIO12/OracleDeveloperStudio12.6-solaris-sparc-bin/developerstudio12.6/bin/cc checking whether the C compiler works... yes checking for C compiler default output file name... a.out checking for suffix of executables... checking whether we are cross compiling... no checking for suffix of object files... o checking whether we are using the GNU C compiler... no checking whether /export/home/itmxadm/STUDIO12/OracleDeveloperStudio12.6-solaris-sparc-bin/developerstudio12.6/bin/cc accepts -g... yes checking for /export/home/itmxadm/STUDIO12/OracleDeveloperStudio12.6-solaris-sparc-bin/developerstudio12.6/bin/cc option to accept ISO C89... none needed checking for mixed cygwin or msys and native VC++ environment... no checking for mixed cygwin and native MinGW environment... no checking if we mix cygwin with any native compiler... no checking if we mix msys with another native compiler... no checking whether make sets $(MAKE)... yes checking for ld.sh... no checking for ld... ld checking for rm... /bin/rm checking for connect... no checking for socket in -lsocket... yes checking for gethostbyname... no checking for main in -lnsl... yes checking how to run the C preprocessor... /export/home/itmxadm/STUDIO12/OracleDeveloperStudio12.6-solaris-sparc-bin/developerstudio12.6/bin/cc -E checking for grep that handles long lines and -e... /usr/bin/ggrep checking for egrep... /usr/bin/ggrep -E checking for ANSI C header files... yes checking for sys/types.h... yes checking for sys/stat.h... yes checking for stdlib.h... yes checking for string.h... yes checking for memory.h... yes checking for strings.h... yes checking for inttypes.h... yes checking for stdint.h... yes checking for unistd.h... yes checking fcntl.h usability... yes checking fcntl.h presence... yes checking for fcntl.h... yes checking netdb.h usability... yes checking netdb.h presence... yes checking for netdb.h... yes checking for stdlib.h... (cached) yes checking for string.h... (cached) yes checking sys/socket.h usability... yes checking sys/socket.h presence... yes checking for sys/socket.h... yes checking winsock2.h usability... no checking winsock2.h presence... no checking for winsock2.h... no checking windows.h usability... no checking windows.h presence... no checking for windows.h... no checking for sql.h... no checking for sqlext.h... no checking for an ANSI C-conforming const... yes checking for size_t... yes checking for struct sockaddr_in6.sin6_addr... yes checking for memset... yes checking for socket... yes checking for native win32 threads... no checking for pthread_create in -lpthread... yes checking pthread.h usability... yes checking pthread.h presence... yes checking for pthread.h... yes checking pthread/mit/pthread.h usability... no checking pthread/mit/pthread.h presence... no checking for pthread/mit/pthread.h... no checking size of void *... 4 checking for odbc in standard locations... no
configure: creating ./config.status
config.status: creating c_src/sparc-sun-solaris2.11/Makefile
configure: WARNING: No odbc library found skipping odbc
configure: WARNING: "ODBC library - header check failed"
configure: WARNING: "ODBC library - link check failed"

=== Running configure in /export/home/itmxadm/otp-OTP-22.2/lib/snmp ===
./configure 'CC=/export/home/itmxadm/STUDIO12/OracleDeveloperStudio12.6-solaris-sparc-bin/developerstudio12.6/bin/cc' 'CXX=/export/home/itmxadm/STUDIO12/OracleDeveloperStudio12.6-solaris-sparc-bin/developerstudio12.6/bin/CC' 'CXXFLAGS=-g' CFLAGS='-O' --disable-option-checking --cache-file=/dev/null --srcdir="/export/home/itmxadm/otp-OTP-22.2/lib/snmp"
checking build system type... sparc-sun-solaris2.11 checking host system type... sparc-sun-solaris2.11 checking for perl... perl
configure: creating ./config.status
config.status: creating mibs/Makefile

=== Running configure in /export/home/itmxadm/otp-OTP-22.2/lib/wx ===
./configure 'CC=/export/home/itmxadm/STUDIO12/OracleDeveloperStudio12.6-solaris-sparc-bin/developerstudio12.6/bin/cc' 'CXX=/export/home/itmxadm/STUDIO12/OracleDeveloperStudio12.6-solaris-sparc-bin/developerstudio12.6/bin/CC' 'CXXFLAGS=-g' CFLAGS='-O' --disable-option-checking --cache-file=/dev/null --srcdir="/export/home/itmxadm/otp-OTP-22.2/lib/wx"
checking build system type... sparc-sun-solaris2.11 checking host system type... sparc-sun-solaris2.11 checking for gcc... /export/home/itmxadm/STUDIO12/OracleDeveloperStudio12.6-solaris-sparc-bin/developerstudio12.6/bin/cc checking whether the C compiler works... yes checking for C compiler default output file name... a.out checking for suffix of executables... checking whether we are cross compiling... no checking for suffix of object files... o checking whether we are using the GNU C compiler... no checking whether /export/home/itmxadm/STUDIO12/OracleDeveloperStudio12.6-solaris-sparc-bin/developerstudio12.6/bin/cc accepts -g... yes checking for /export/home/itmxadm/STUDIO12/OracleDeveloperStudio12.6-solaris-sparc-bin/developerstudio12.6/bin/cc option to accept ISO C89... none needed checking whether we are using the GNU C++ compiler... no checking whether /export/home/itmxadm/STUDIO12/OracleDeveloperStudio12.6-solaris-sparc-bin/developerstudio12.6/bin/CC accepts -g... yes checking for ranlib... ranlib checking how to run the C preprocessor... /export/home/itmxadm/STUDIO12/OracleDeveloperStudio12.6-solaris-sparc-bin/developerstudio12.6/bin/cc -E
configure: Building for solaris2.11
checking for mixed cygwin or msys and native VC++ environment... no checking for mixed cygwin and native MinGW environment... no checking if we mix cygwin with any native compiler... no checking if we mix msys with another native compiler... no checking for grep that handles long lines and -e... /usr/bin/ggrep checking for egrep... /usr/bin/ggrep -E checking for ANSI C header files... yes checking for sys/types.h... yes checking for sys/stat.h... yes checking for stdlib.h... yes checking for string.h... yes checking for memory.h... yes checking for strings.h... yes checking for inttypes.h... yes checking for stdint.h... yes checking for unistd.h... yes checking size of void *... 4 checking GL/gl.h usability... yes checking GL/gl.h presence... yes checking for GL/gl.h... yes checking GL/glu.h usability... yes checking GL/glu.h presence... yes checking for GL/glu.h... yes checking for debug build of wxWidgets... checking for wx-config... /usr/bin/wx-config checking for wxWidgets version >= 2.8.4 (--unicode --debug=yes)... yes (version 3.1.3) checking for wxWidgets static library... no checking for standard build of wxWidgets... checking for wx-config... (cached) /usr/bin/wx-config checking for wxWidgets version >= 2.8.4 (--unicode --debug=no)... yes (version 3.1.3) checking for wxWidgets static library... no checking if wxwidgets have opengl support... no checking for GLintptr... yes checking for GLintptrARB... yes checking for GLchar... yes checking for GLcharARB... yes checking for GLhalfARB... yes checking for GLint64EXT... yes checking GLU Callbacks uses Tiger Style... no checking for wx/stc/stc.h... no checking if we can link wxwidgets programs... no
configure: creating sparc-sun-solaris2.11/config.status
config.status: creating config.mk
config.status: creating c_src/Makefile
configure: WARNING: Can not find wx/stc/stc.h -Wall -fPIC -O -Wno-deprecated-declarations -fomit-frame-pointer -fno-strict-aliasing -D_GNU_SOURCE -D_THREAD_SAFE -D_REENTRANT -I/export/home/itmxadm/WX/wxWidgets-3.1.3/build_x11/lib/wx/include/x11univ-unicode-3.1 -I/export/home/itmxadm/WX/wxWidgets-3.1.3/include -D_FILE_OFFSET_BITS=64 -DWXUSINGDLL -D__WXUNIVERSAL__ -D__WXX11__ -pthreads -D_REENTRANT
configure: WARNING: Can not link wx program are all developer packages installed?

*********************************************************************
**********************  APPLICATIONS DISABLED  **********************
*********************************************************************
jinterface     : No Java compiler found
odbc           : ODBC library - link check failed
*********************************************************************
*********************************************************************
**********************  APPLICATIONS INFORMATION  *******************
*********************************************************************
wx             : wxWidgets don't have gl support, wx will NOT be useable
                 wxWidgets don't have wxStyledTextControl (stc.h), wx will NOT be useable
                 Can not link the wx driver, wx will NOT be useable
*********************************************************************
*********************************************************************
**********************  DOCUMENTATION INFORMATION  ******************
*********************************************************************
documentation  : fop is missing. Using fakefop to generate placeholder PDF files.
*********************************************************************

PRE-BUILD SUCCESSFUL (total time: 1m 0s)

- root@REDACTED # which wx-config && wx-config --version-full
/usr/bin/wx-config
3.1.3.0

and configure below.
./configure CC=${IDE_CC} CXX=${IDE_CXX} CFLAGS="-O" CXXFLAGS=-g --enable-compat28 --prefix=/usr/local

With the configure line above, the build succeeds but wx still can't be used.

On Fri, 27 Dec 2019 at 18:13, Schneider wrote:

> Maybe the authors of https://adoptingerlang.org/docs/development/setup/ would
> be kind enough to add such details?
>
>
> On 27 Dec 2019 at 10:40, Craig Everett wrote the following:
>
>
> Roger Lipscombe wrote:
>
> > On Wed, 25 Dec 2019 at 17:57, zxq9 wrote:
> >
> > > I made notes for myself for building with Kerl on Debian/Ubuntu here:
> > > http://zxq9.com/archives/1603
> >
> > FWIW, my notes are here:
> > http://blog.differentpla.net/blog/2019/01/30/erlang-build-prerequisites/
> > -- the package list is not quite the same.
>
> I wonder what the best way to aggregate this sort of information in an
> easy to navigate way would be? AFAIK we don't have a wiki space for this,
> and putting one up for such limited scope seems overkill.
>
> I wouldn't mind maintaining a page of build-req pages (that is, "do these
> steps before running kerl or make if you are on system X") for various
> systems, but I would need people to forward me their notes for different
> systems and configurations.
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From mikpelinux@REDACTED  Mon Dec 30 17:48:32 2019
From: mikpelinux@REDACTED (Mikael Pettersson)
Date: Mon, 30 Dec 2019 17:48:32 +0100
Subject: build erlang otp 22.2 but can't used observer
In-Reply-To:
References:
Message-ID:

On Mon, Dec 30, 2019 at 4:48 PM Rareman S wrote:
>
> Hi,
>
> Here is my pre-build with erlang OTP22.2.
...
> === Running configure in /export/home/itmxadm/otp-OTP-22.2/lib/wx ===
...
> checking if we can link wxwidgets programs... no
> configure: creating sparc-sun-solaris2.11/config.status
> config.status: creating config.mk
> config.status: creating c_src/Makefile
> configure: WARNING: Can not find wx/stc/stc.h -Wall -fPIC -O -Wno-deprecated-declarations -fomit-frame-pointer -fno-strict-aliasing -D_GNU_SOURCE -D_THREAD_SAFE -D_REENTRANT -I/export/home/itmxadm/WX/wxWidgets-3.1.3/build_x11/lib/wx/include/x11univ-unicode-3.1 -I/export/home/itmxadm/WX/wxWidgets-3.1.3/include -D_FILE_OFFSET_BITS=64 -DWXUSINGDLL -D__WXUNIVERSAL__ -D__WXX11__ -pthreads -D_REENTRANT
> configure: WARNING: Can not link wx program are all developer packages installed?
...
> wx : wxWidgets don't have gl support, wx will NOT be useable
> wxWidgets don't have wxStyledTextControl (stc.h), wx will NOT be useable
> Can not link the wx driver, wx will NOT be useable
...
> ./configure CC=${IDE_CC} CXX=${IDE_CXX} CFLAGS="-O" CXXFLAGS=-g --enable-compat28 --prefix=/usr/local
> With the configure line above, the build succeeds but wx still can't be used.

Given the wx-related warnings above, it's not surprising that wx
doesn't work for you. You'll need to inspect the config.log file for
lib/wx/ to see what the errors were, but off-hand it looks like an
incomplete wxWidgets installation.

From frank.muller.erl@REDACTED  Mon Dec 30 19:58:00 2019
From: frank.muller.erl@REDACTED (Frank Muller)
Date: Mon, 30 Dec 2019 19:58:00 +0100
Subject: Port locks with high time under LCNT
In-Reply-To:
References:
Message-ID:

I was able to find out which lock is used by run_queue:
https://gist.github.com/frankmullerl/008174c6594ca27584ac7f4e6724bee5

Question: how can I check the lock behind #Port for example?
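For reference, lcnt itself can drill down from the conflicts listing to the
individual locks behind a class. A rough session sketch, assuming the node
runs a lock-counting (lcnt-enabled) emulator, and assuming the class shows up
as port_lock in your own conflicts output (check the names it actually prints):

1> lcnt:start().
2> lcnt:clear().                 %% reset counters before sampling
3> timer:sleep(10000).           %% let the web server take some traffic
4> lcnt:collect().               %% pull the counter data from the runtime
5> lcnt:conflicts().             %% per-class summary, as in the gist
6> lcnt:inspect(port_lock, [{combine, false}, {locations, true}]).

Here {combine, false} should list the locks one row per id (e.g. one row per
#Port<...>) instead of one merged row per class, and {locations, true} adds
the emulator source locations where each lock was taken.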
/Frank

Hi All
>
> My custom-made web server which only serves static files (a la Hugo:
> https://gohugo.io/) is showing this under LCNT:
> https://gist.github.com/frankmullerl/008174c6594ca27584ac7f4e6724bee5
>
> Can someone explain how to dig further to understand what's going on and
> why these Port locks are taking so much time?
>
> /Frank
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From alex@REDACTED  Mon Dec 30 22:24:24 2019
From: alex@REDACTED (alex@REDACTED)
Date: Mon, 30 Dec 2019 16:24:24 -0500
Subject: build erlang otp 22.2 but can't used observer
In-Reply-To:
References:
Message-ID:

Yeah, it looks like it cannot find the wx shared libraries. Verify the
location of the wx files. You might need to configure /etc/ld.so.conf to
identify where they are located. For reference, I personally compile
wxWidgets from scratch, with GTK3 and Mesa already installed, with...

export CFLAGS="-O3 -fPIC -march=native -pipe"
export CXXFLAGS="$CFLAGS"

CFLAGS CXXFLAGS ./configure \
  --prefix=/usr \
  --libdir=/usr/lib64 \
  --sysconfdir=/etc \
  --enable-mediactrl \
  --with-opengl \
  --enable-graphics_ctx \
  --with-gtk=3 \
  --enable-unicode \
  --enable-compat28 \
  --enable-plugins \
  --enable-stc \
  --enable-webview \
  $wk \
  $stl \
  $st

BTW, the configuration I use to compile Erlang is...

CFLAGS ./configure \
  --prefix=/usr \
  --libdir=/usr/lib64 \
  --without-odbc \
  --without-megaco \
  --enable-threads \
  --enable-dirty-schedulers \
  --enable-smp-support \
  --enable-hipe

...followed by make and make install. These are part of bigger scripts I
use to make packages for my Linux distro, but this is practically it.

Cheers,
Alex

On 12/30/19 11:48 AM, Mikael Pettersson wrote:
> On Mon, Dec 30, 2019 at 4:48 PM Rareman S wrote:
>> Hi,
>>
>> Here is my pre-build with erlang OTP22.2.
> ...
>> === Running configure in /export/home/itmxadm/otp-OTP-22.2/lib/wx ===
> ...
>> checking if we can link wxwidgets programs... no
>> configure: creating sparc-sun-solaris2.11/config.status
>> config.status: creating config.mk
>> config.status: creating c_src/Makefile
>> configure: WARNING: Can not find wx/stc/stc.h -Wall -fPIC -O -Wno-deprecated-declarations -fomit-frame-pointer -fno-strict-aliasing -D_GNU_SOURCE -D_THREAD_SAFE -D_REENTRANT -I/export/home/itmxadm/WX/wxWidgets-3.1.3/build_x11/lib/wx/include/x11univ-unicode-3.1 -I/export/home/itmxadm/WX/wxWidgets-3.1.3/include -D_FILE_OFFSET_BITS=64 -DWXUSINGDLL -D__WXUNIVERSAL__ -D__WXX11__ -pthreads -D_REENTRANT
>> configure: WARNING: Can not link wx program are all developer packages installed?
> ...
>> wx : wxWidgets don't have gl support, wx will NOT be useable
>> wxWidgets don't have wxStyledTextControl (stc.h), wx will NOT be useable
>> Can not link the wx driver, wx will NOT be useable
> ...
>> ./configure CC=${IDE_CC} CXX=${IDE_CXX} CFLAGS="-O" CXXFLAGS=-g --enable-compat28 --prefix=/usr/local
>> With the configure line above, the build succeeds but wx still can't be used.
> Given the wx-related warnings above, it's not surprising that wx
> doesn't work for you. You'll need to inspect the config.log file for
> lib/wx/ to see what the errors were, but off-hand it looks like an
> incomplete wxWidgets installation.
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From alex@REDACTED  Mon Dec 30 22:33:03 2019
From: alex@REDACTED (alex@REDACTED)
Date: Mon, 30 Dec 2019 16:33:03 -0500
Subject: build erlang otp 22.2 but can't used observer
In-Reply-To:
References:
Message-ID:

Oops, disregard the $wk, $stl and $st variable calls. Sorry, but I'm half
awake, half asleep. Actually, "st" is for enabling the shared library,
which is needed. See below...

On 12/30/19 4:24 PM, alex@REDACTED wrote:
> Yeah, it looks like it cannot find the wx shared libraries. Verify the
> location of the wx files. You might need to configure /etc/ld.so.conf
> to identify where they are located. For reference, I personally
> compile wxWidgets from scratch, with GTK3 and Mesa already installed,
> with...
>
> export CFLAGS="-O3 -fPIC -march=native -pipe"
> export CXXFLAGS="$CFLAGS"
>
> CFLAGS CXXFLAGS ./configure \
>   --prefix=/usr \
>   --libdir=/usr/lib64 \
>   --sysconfdir=/etc \
>   --enable-mediactrl \
>   --with-opengl \
>   --enable-graphics_ctx \
>   --with-gtk=3 \
>   --enable-unicode \
>   --enable-compat28 \
>   --enable-plugins \
>   --enable-stc \
>   --enable-webview \
    --enable-shared
>
> BTW, the configuration I use to compile Erlang is...
>
> CFLAGS ./configure \
>   --prefix=/usr \
>   --libdir=/usr/lib64 \
>   --without-odbc \
>   --without-megaco \
>   --enable-threads \
>   --enable-dirty-schedulers \
>   --enable-smp-support \
>   --enable-hipe
>
> ...followed by make and make install. These are part of bigger scripts
> I use to make packages for my Linux distro, but this is practically it.
>
> Cheers,
> Alex
>
> On 12/30/19 11:48 AM, Mikael Pettersson wrote:
>> On Mon, Dec 30, 2019 at 4:48 PM Rareman S wrote:
>>> Hi,
>>>
>>> Here is my pre-build with erlang OTP22.2.
>> ...
>>> === Running configure in /export/home/itmxadm/otp-OTP-22.2/lib/wx ===
>> ...
>>> checking if we can link wxwidgets programs... no
>>> configure: creating sparc-sun-solaris2.11/config.status
>>> config.status: creating config.mk
>>> config.status: creating c_src/Makefile
>>> configure: WARNING: Can not find wx/stc/stc.h -Wall -fPIC -O -Wno-deprecated-declarations -fomit-frame-pointer -fno-strict-aliasing -D_GNU_SOURCE -D_THREAD_SAFE -D_REENTRANT -I/export/home/itmxadm/WX/wxWidgets-3.1.3/build_x11/lib/wx/include/x11univ-unicode-3.1 -I/export/home/itmxadm/WX/wxWidgets-3.1.3/include -D_FILE_OFFSET_BITS=64 -DWXUSINGDLL -D__WXUNIVERSAL__ -D__WXX11__ -pthreads -D_REENTRANT
>>> configure: WARNING: Can not link wx program are all developer packages installed?
>> ...
>>> wx : wxWidgets don't have gl support, wx will NOT be useable
>>> wxWidgets don't have wxStyledTextControl (stc.h), wx will NOT be useable
>>> Can not link the wx driver, wx will NOT be useable
>> ...
>>> ./configure CC=${IDE_CC} CXX=${IDE_CXX} CFLAGS="-O" CXXFLAGS=-g --enable-compat28 --prefix=/usr/local
>>> With the configure line above, the build succeeds but wx still can't be used.
>> Given the wx-related warnings above, it's not surprising that wx
>> doesn't work for you. You'll need to inspect the config.log file for
>> lib/wx/ to see what the errors were, but off-hand it looks like an
>> incomplete wxWidgets installation.
>>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From S.J.Thompson@REDACTED  Tue Dec 31 15:52:46 2019
From: S.J.Thompson@REDACTED (Simon Thompson)
Date: Tue, 31 Dec 2019 15:52:46 +0100
Subject: SBLP 2020 - first call for papers
Message-ID:

Call for Papers - XXIV Brazilian Symposium on Programming Languages (SBLP 2020)

Natal, Brazil, September 21-25, 2020

Submission link: https://easychair.org/conferences/?conf=sblp2020

SBLP 2020 is the 24th edition of the Brazilian Symposium on Programming
Languages. It is promoted by the Brazilian Computer Society (SBC) and
constitutes a forum for researchers, students and professionals to present
and discuss ideas and innovations in the design, definition, analysis,
implementation and practical use of programming languages. SBLP's first
edition was in 1996. Since 2010, it has been part of CBSoft, the Brazilian
Conference on Software: Theory and Practice.

Submission Guidelines
________________________________________________________________________________

Papers can be written in Portuguese or English. Submissions in English are
encouraged because the proceedings will be indexed in the ACM Digital
Library. The acceptance of a paper implies that at least one of its authors
will register for the symposium to present it.

Papers must be original and not simultaneously submitted to another journal
or conference.

SBLP 2020 will use a lightweight double-blind review process. The
manuscripts should be submitted for review anonymously (i.e., without
listing the authors' names on the paper) and references to one's own work
should be made in the third person. Papers must be submitted electronically
(in PDF format) via the EasyChair system:
http://www.easychair.org/conferences/?conf=sblp2020

The following paper categories are welcome (page limits include figures,
references and appendices):

Full papers: up to 8 pages long in ACM 2-column conference format, available
at http://www.acm.org/publications/proceedings-template

Short papers: up to 3 pages in the same format. Short papers can discuss new
ideas which are at an early stage of development or can report partial
results of on-going dissertations or theses.

List of Topics (related but not limited to the following)
________________________________________________________________________________

- Programming paradigms and styles, scripting and domain-specific languages
  and support for real-time, service-oriented, multi-threaded, parallel, and
  distributed programming
- Program generation and transformation
- Formal semantics and theoretical foundations: denotational, operational,
  algebraic and categorical
- Program analysis and verification, type systems, static analysis and
  abstract interpretation
- Programming language design and implementation, programming language
  environments, compilation and interpretation techniques

Publication
________________________________________________________________________________

SBLP proceedings will be published in ACM's digital library. As in previous
editions, authors of selected regular papers will be invited to submit an
extended version of their work to be considered for publication in a
journal's special issue. Since 2009, selected papers of each SBLP edition
have been published in a special issue of Science of Computer Programming,
by Elsevier.
Important dates
________________________________________________________________________________

Abstract submission: 31 May, 2020
Paper submission: 7 June, 2020 (strict)
Author notification: 24 July, 2020
Camera ready deadline: 9 August, 2020

Program Committee
________________________________________________________________________________

Alex Garcia, Instituto Militar de Engenharia
Alvaro Moreira, Universidade Federal do Rio Grande do Sul
Anamaria Moreira, Universidade Federal do Rio de Janeiro
André Murbach Maidl, Pontifícia Universidade Católica do Paraná
Bernhard Scholz, The University of Sydney
Beta Ziliani, Universidad Nacional de Córdoba
Christiano Braga, Universidade Federal Fluminense
Cristiano Vasconcellos, Universidade do Estado de Santa Catarina
Fernando Castor, Universidade Federal de Pernambuco
Fernando Pereira, Universidade Federal de Minas Gerais
Francisco Sant'Anna, Universidade Estadual do Rio de Janeiro
Hans-Wolfgang Loidl, Heriot-Watt University
Henrique Rebêlo, Universidade Federal de Pernambuco
João Paulo Fernandes, University of Coimbra
Krishna Nandivada, IIT Madras
Laure Gonnord, University of Lyon
Leonardo Reis, Universidade Federal de Juiz de Fora
Lucilia Figueiredo, Universidade Federal de Ouro Preto
Marisa Bigonha, Universidade Federal de Minas Gerais
Martin Musicante, Universidade Federal do Rio Grande do Norte
Pavlos Petoumenos, University of Manchester
Renato Cerqueira, IBM Research
Roberto Ierusalimschy, PUC-Rio
Rodrigo Ribeiro, Universidade Federal de Ouro Preto
Sandro Rigo, Universidade de Campinas
Sérgio Medeiros, Universidade Federal do Rio Grande do Norte (Chair)
Simon Thompson, University of Kent
Tomofumi Yuki, INRIA

Contact
________________________________________________________________________________

All questions about submissions should be emailed to
Sérgio Medeiros (sergiomedeiros@REDACTED)

Simon Thompson | Professor of Logic and Computation
School of Computing | University of Kent | Canterbury, CT2 7NF, UK
s.j.thompson@REDACTED | M +44 7986 085754 | W www.cs.kent.ac.uk/~sjt

From pierrefenoll@REDACTED  Tue Dec 31 17:12:11 2019
From: pierrefenoll@REDACTED (Pierre Fenoll)
Date: Tue, 31 Dec 2019 17:12:11 +0100
Subject: Matching fun M:F/A
Message-ID:

Hi,

Since a few releases, the value fun M:F/A (provided M, F & A are bound) is
a literal. It can be read with file:consult/1 as well as
erlang:binary_to_term/1.

Running OTP 22, funs can be compared:

eq(F) ->
    %% Compiles & works as expected.
    F == fun lists:keysearch/3.

However MFAs cannot be matched:

%% syntax error before: 'fun'
fmatch(fun lists:keysearch/3) -> true;
fmatch(_) -> false.

cmatch(F) ->
    case F of
        %% illegal pattern
        fun lists:keysearch/3 -> true;
        %% illegal guard expression
        X when X == fun lists:keysearch/3 -> true;
        %% illegal pattern
        fun lists:_/3 -> inte;
        fun _:handle_cast/2 -> resting;
        _ -> false
    end.

Is this intended? I believe it would be interesting to allow such patterns,
as well as fun _:_/_. This could help in optimization through
specialization and would probably make for some new approaches.

Among all funs, only fully qualified funs can be expressed this way, so
this behaviour could be a bit surprising to some, but MFAs are already
comparable today, so I'd say we're already halfway there.

Thoughts?

PS: it seems one is no longer able to log into bugs.erlang.org with GitHub
credentials, as
https://bugs.erlang.org/login.jsp?os_destination=%2Fdefault.jsp doesn't
provide the option anymore. Is this normal?

-------------- next part --------------
An HTML attachment was scrubbed...
URL:
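A runtime approximation of the cmatch/1 clauses above is possible today via
erlang:fun_info/2, which works on any fun. A minimal sketch; fmatch2/1 is a
hypothetical helper (not an OTP API), and since local funs report
compiler-generated names, this is only dependable for fully qualified
(external) funs:

%% Approximates the illegal patterns by inspecting the fun at runtime.
fmatch2(F) when is_function(F, 3) ->
    case {erlang:fun_info(F, type),
          erlang:fun_info(F, module),
          erlang:fun_info(F, name)} of
        %% fun lists:keysearch/3
        {{type, external}, {module, lists}, {name, keysearch}} -> true;
        %% the "fun lists:_/3" wildcard
        {{type, external}, {module, lists}, _} -> inte;
        _ -> false
    end;
fmatch2(F) when is_function(F, 2) ->
    %% the "fun _:handle_cast/2" wildcard: any module, fixed name
    case erlang:fun_info(F, name) of
        {name, handle_cast} -> resting;
        _ -> false
    end;
fmatch2(_) ->
    false.

Each wildcard position in the proposed patterns simply becomes a fun_info/2
item that is left unexamined.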