From erlang@REDACTED Wed Mar 1 02:04:27 2006 From: erlang@REDACTED (Michael McDaniel) Date: Tue, 28 Feb 2006 17:04:27 -0800 Subject: Erlang: The Movie In-Reply-To: <655ACDFB-5860-4BA6-9475-F8D8E9E98CE8@citeulike.org> References: <655ACDFB-5860-4BA6-9475-F8D8E9E98CE8@citeulike.org> Message-ID: <20060301010427.GB9697@delora.autosys.us> On Tue, Feb 28, 2006 at 03:23:24PM +0000, Richard Cameron wrote: > > It looks like "Erlang: The Movie" has resurfaced in all its glory on > Google Video: > > http://video.google.com/videoplay?docid=-5830318882717959520 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Thank you ! How fun to see this bit of Erlang history. ~Michael From kruegger@REDACTED Wed Mar 1 03:10:06 2006 From: kruegger@REDACTED (Stephen Han) Date: Tue, 28 Feb 2006 18:10:06 -0800 Subject: http:request returns {error, session_remotly_closed} In-Reply-To: <20060226171532.2772.qmail@web32501.mail.mud.yahoo.com> References: <20060226171532.2772.qmail@web32501.mail.mud.yahoo.com> Message-ID: <86f1f5350602281810l6690a192u3730bcfc718bee52@mail.gmail.com> I ran your sample under Windows XP service pack 2 with OTP R10B-9. I could not reproduce the problem. One of my application do http:request every 5 minutes. It has been doing that since 2005 June. So far didn't see that error since. But mine do http:request every 5 min so yours probably do it really fast without interval. I think it may be the persistent connection that confuses whoever that is, http module or web server. :-) Try asynchronously and see if happens again. regards, On 2/26/06, Soren Gronvald wrote: > > hi, > > I have a problem with http:request that I think could > be a bug (but of course it could be due to ignorance > on my part). > > I am trying to read a url from yahoo.com in the > simplest possible way - http:request(Url)- but it > fails with the error > > {error,session_remotly_closed} > > This of course indicates that it is the server that > causes trouble, but I can read the same url with other > technologies (java and visual basic) with no hassle at > all. And I can read similar but shorter urls from > yahaoo.com with erlang http:request without problems. > > this url can be read successfully: > U1 = > " > http://finance.yahoo.com/d/quotes.csv?s=IBM,GE,GM&f=sl1d1t1c1baphgvn&e=.csv > ". > > and this fails. > U2 = > " > http://finance.yahoo.com/d/quotes.csv?s=IBM,GE,GM,F,PKD,GW&f=sl1d1t1c1baphgvn&e=.csv > ". > > both should return a comma separated list of stock > exchange quotes from yahoo.com. > > As I can read the same url with other technologies it > makes me think that there is a bug in the http client > module. Or, could it be that I need some > HTTPOptins/Options to make it work? > > I am using erlang R10B-9 on Windows XP SP2 > professional edition with all updates applied. I have > downloaded the precompiled version from erlang.org. > > Best regards > Gronvald > > > I have included program examples below > > this erlang program shows the problem > > % this works > testurl(1) -> > U = > " > http://finance.yahoo.com/d/quotes.csv?s=IBM,GE,GM&f=sl1d1t1c1baphgvn&e=.csv > ", > geturl(U); > > % this exits > testurl(2) -> > U = > " > http://finance.yahoo.com/d/quotes.csv?s=IBM,GE,GM,F,PKD,GW&f=sl1d1t1c1baphgvn&e=.csv > ", > geturl(U). > > geturl(Url) -> > application:start(inets), > X = http:request(Url), > case X of > {ok, { {_Version, 200, _ReasonPhrase}, _Headers, > Body} } -> > Body; > _ -> X > end. 
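For reference, the asynchronous variant that Stephen suggests trying looks roughly like this. It is only a sketch against the http:request/4 API of the R10B inets client; the function name and the 30-second timeout are made up:

geturl_async(Url) ->
    %% {sync, false} makes http:request/4 return a request id immediately;
    %% the result is delivered later as a {http, {RequestId, Result}} message.
    application:start(inets),
    {ok, RequestId} = http:request(get, {Url, []}, [], [{sync, false}]),
    receive
        {http, {RequestId, {{_Version, 200, _Reason}, _Headers, Body}}} ->
            Body;
        {http, {RequestId, Other}} ->
            Other
    after 30000 ->
        {error, timeout}
    end.

The request itself is unchanged; only the delivery of the result differs, so this mainly helps to see whether the session_remotly_closed error still occurs when the caller is not blocking on the reply.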
> > > this java program will handle same situation without > error > > import java.net.*; > import java.io.*; > > public class Geturl { > > public static void main(String[] args) throws > Exception { > > String u2 = > > " > http://finance.yahoo.com/d/quotes.csv?s=IBM,GE,GM,F,PKD,GW&f=sl1d1t1c1baphgvn&e=.csv > "; > > URL url = new URL(u2); > InputStream is = new > BufferedInputStream(url.openStream()); > Reader r = new InputStreamReader(is); > > int c = r.read(); > while(c != -1) { > System.out.print((char)c); > c = r.read(); > } > } > } > > > > __________________________________________________ > Do You Yahoo!? > Tired of spam? Yahoo! Mail has the best spam protection around > http://mail.yahoo.com > -------------- next part -------------- An HTML attachment was scrubbed... URL: From akopa@REDACTED Wed Mar 1 04:03:13 2006 From: akopa@REDACTED (Matthew D Swank) Date: Tue, 28 Feb 2006 21:03:13 -0600 Subject: Erlang: The Movie In-Reply-To: <655ACDFB-5860-4BA6-9475-F8D8E9E98CE8@citeulike.org> References: <655ACDFB-5860-4BA6-9475-F8D8E9E98CE8@citeulike.org> Message-ID: <44050EF1.1090008@charter.net> Richard Cameron wrote: > > It looks like "Erlang: The Movie" has resurfaced in all its glory on > Google Video: > > http://video.google.com/videoplay?docid=-5830318882717959520 > Very affecting performances all the way around :) -- "You do not really understand something unless you can explain it to your grandmother." ? Albert Einstein. From marc.vanwoerkom@REDACTED Wed Mar 1 08:58:43 2006 From: marc.vanwoerkom@REDACTED (User Marc van Woerkom) Date: Wed, 01 Mar 2006 08:58:43 +0100 Subject: Erlang: The Movie In-Reply-To: <44050EF1.1090008@charter.net> References: <655ACDFB-5860-4BA6-9475-F8D8E9E98CE8@citeulike.org> <44050EF1.1090008@charter.net> Message-ID: <44055433.9040609@fernuni-hagen.de> Matthew D Swank wrote: > Richard Cameron wrote: >> >> It looks like "Erlang: The Movie" has resurfaced in all its glory on >> Google Video: >> >> http://video.google.com/videoplay?docid=-5830318882717959520 >> > > Very affecting performances all the way around :) > > I love this review: "gerade gefunden: Erlang the Movie . Ich kann mir nicht helfen, aber das hat irgendwie was von einer Mischung aus Sendung mit der Maus und Monty Python. @Tim: Das ist doch mal ein echter Geekfilm ;-)" http://fukami.vakuum.net/archives/2006/01/23/erlang-the-movie/ Which means "a bit like a mix of Sendung mit der Maus (children program, which is famous for short films that explain how things work) and Monty Python. A real geek movie". :-) You watch it and think "Is it meant serious? Or are these folks just having fun trying out the new departments video camera instead of working? How do they manage not bursting out laughing?". I really need the secret file, the one with the outtakes. BTW I showed this on a laptop during coffeebreak, but I fear it didn't brainwash the attending Java programmers. However they liked the frame with the old X11 surface and asked me what desktop I was using on the laptop. 
:-) Regards, Marc From joe.armstrong@REDACTED Wed Mar 1 09:31:10 2006 From: joe.armstrong@REDACTED (Joe Armstrong (AL/EAB)) Date: Wed, 1 Mar 2006 09:31:10 +0100 Subject: Erlang: The Movie Message-ID: > -----Original Message----- > From: owner-erlang-questions@REDACTED > [mailto:owner-erlang-questions@REDACTED] On Behalf Of > Matthew D Swank > Sent: den 1 mars 2006 04:03 > To: Erlang Questions > Subject: Re: Erlang: The Movie > > Richard Cameron wrote: > > > > It looks like "Erlang: The Movie" has resurfaced in all its > glory on > > Google Video: > > > > http://video.google.com/videoplay?docid=-5830318882717959520 > > > > Very affecting performances all the way around :) Note the differnce in my hair colour - all this Erlang stuff has made my hair go gray! /Joe > > > -- > "You do not really understand something unless you can > explain it to your grandmother." - Albert Einstein. > From ulf.wiger@REDACTED Wed Mar 1 10:37:18 2006 From: ulf.wiger@REDACTED (Ulf Wiger (AL/EAB)) Date: Wed, 1 Mar 2006 10:37:18 +0100 Subject: keysort + removing duplicate keys Message-ID: At least I found this slightly surprising: 2> lists:ukeysort(1,[{1,a},{1,b}]). [{1,a},{1,b}] 3> lists:ukeysort(1,[{1,a},{1,a}]). [{1,a}] The man page says that lists:ukeysort/2 removes consecutive duplicates. I just assumed that it meant _duplicate keys_, but apparently not. Perhaps the manual could emphasize this point, for the sake of dunces like me? I went on to orddict: 4> orddict:from_list([{'$5',7},{'$4',6},{'$3',5},{'$2',4},{'$2',3},{'$1',2} ,{'$1',1}]). [{'$1',1},{'$2',3},{'$3',5},{'$4',6},{'$5',7}] (That is, I have a list with proplist semantics, and want to produce a list of values sorted by the lexical order of the keys.) orddict:from_list(L) seems to do what I want, but the man page doesn't mention how orddict goes about treating consecutive duplicates. Wouldn't this sort information be useful? /Ulf W From luke@REDACTED Wed Mar 1 10:40:52 2006 From: luke@REDACTED (Luke Gorrie) Date: Wed, 01 Mar 2006 10:40:52 +0100 Subject: cvs rtag -r branch -d yesterday my_branch_yesterday (workaround) References: <4404842E.7080902@comcast.net> Message-ID: Ernie Makris writes: > Or you can switch to subversion:) Incitement to switch version control systems is a serious offense here in the Workers' Republic of Synapse! You have been tried and sentenced to death in absentia. From ulf.wiger@REDACTED Wed Mar 1 11:10:46 2006 From: ulf.wiger@REDACTED (Ulf Wiger (AL/EAB)) Date: Wed, 1 Mar 2006 11:10:46 +0100 Subject: rfc: rdbms - new type system Message-ID: My solution will be to use my own match spec evaluator. I did some benchmarking, and the difference in run-time cost is minimal, partly because my data is already on the process heap, and the built-in match specs must be "compiled" before use in match_spec_run. Now I can do this: 1> rdbms_ms:run_ms([{1,["deep"," list"]}], [{{'$1','$2'},[],[{{'$1',{list_to_binary,'$2'}}}]}]). [{1, <<100,101,101,112,108,105,115,116>>}] which of course is very nice for compacting the database. The corresponding output filter would be: 2> rdbms_ms:run_ms(v(-1),[{{'$1','$2'},[],[{{'$1',{binary_to_list,'$2'}}}]} ]). [{1,"deep list"}] if one doesn't want to mess with binaries in the program. My match spec library also has the added bonus of 'let' and 'subterm' operators in the guards, to enable recursive matching. 
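For comparison, this is what the same kind of record-level rewrite looks like with the built-in machinery, using only constructs that ordinary match specs do support (guard tests and tuple construction). The function and the {int, Value} tagging are invented for illustration; calls such as list_to_binary/1 are exactly what built-in match spec bodies cannot do:

rewrite_ints(Terms) ->
    %% Rewrite {Key, Val} pairs with integer values into {Key, {int, Val}}.
    %% In match spec bodies a tuple is constructed with double braces.
    Ms = [{{'$1', '$2'},
           [{is_integer, '$2'}],
           [{{'$1', {{int, '$2'}}}}]}],
    Compiled = ets:match_spec_compile(Ms),
    ets:match_spec_run(Terms, Compiled).

For example, rewrite_ints([{a, 1}, {b, "x"}]) returns [{a, {int, 1}}]; terms that do not match are simply dropped from the result, which is another difference from a per-record type check.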
(: /Ulf W > -----Original Message----- > From: owner-erlang-questions@REDACTED > [mailto:owner-erlang-questions@REDACTED] On Behalf Of Ulf > Wiger (AL/EAB) > Sent: den 28 februari 2006 11:10 > To: erlang-questions@REDACTED > Subject: RE: rfc: rdbms - new type system > > > Ulf Wiger wrote: > > > > I'm toying with the idea of allowing a match specification as a > > table-level 'type' specification: > > > > Ms = [{{1,2,'$1'}, [{is_integer,'$1'}], [{{1,2,{float,'$1'}}}]}]. > > [{{1,2,'$1'},[{is_integer,'$1'}],[{{1,2,{float,'$1'}}}]}] > > 10> ets:match_spec_run([{1,2,3}],ets:match_spec_compile(Ms)). > > > > [{1,2,3.00000}] > > > Why did this work at all, btw? > > Re-reading the manual, I couldn't find mention of > {float,'$1'} as a valid term construct. > > After having inserted code in rdbms to allow for a match spec > as an input or output filter (basically a term rewriting > filter, or just a record-level type check), I started > thinking that perhaps the most useful rewriting op of all > would be list_to_binary (and binary_to_list in the output filter) > > But list_to_binary obviously doesn't work in a match spec. > Why not? And why does {float,'$1'} work? > > /Ulf W > From bjarne@REDACTED Wed Mar 1 11:12:46 2006 From: bjarne@REDACTED (=?Windows-1252?Q?Bjarne_D=E4cker?=) Date: Wed, 1 Mar 2006 11:12:46 +0100 Subject: Erlang: The Movie References: <655ACDFB-5860-4BA6-9475-F8D8E9E98CE8@citeulike.org> <44050EF1.1090008@charter.net> <44055433.9040609@fernuni-hagen.de> Message-ID: <004301c63d18$c52232a0$651069d4@segeltorp> Hi > You watch it and think "Is it meant serious? Or are these folks just > having fun trying out the new departments video camera instead of > working? How do they manage not bursting out laughing?". I really need > the secret file, the one with the outtakes. > > BTW I showed this on a laptop during coffeebreak, but I fear it didn't > brainwash the attending Java programmers. However they liked the frame > with the old X11 surface and asked me what desktop I was using on the > laptop. :-) The International Switching Symposium used to be a very important conference for the telecomms industry and in 1990 it took place in Stockholm. The conference included a day of technical visits and many were to Ericsson. The Computer Science Laboratory had prepared a presentation which was run eight times during one day. Afterwards Joe and others decided to video record the demo for posterity and for others to enjoy. That seems to have been a very good idea! Bjarne From nils.muellner@REDACTED Wed Mar 1 11:50:36 2006 From: nils.muellner@REDACTED (=?windows-1252?Q?Nils_M=FCllner?=) Date: Wed, 01 Mar 2006 11:50:36 +0100 Subject: Erlang: The Movie In-Reply-To: <004301c63d18$c52232a0$651069d4@segeltorp> References: <655ACDFB-5860-4BA6-9475-F8D8E9E98CE8@citeulike.org> <44050EF1.1090008@charter.net> <44055433.9040609@fernuni-hagen.de> <004301c63d18$c52232a0$651069d4@segeltorp> Message-ID: <44057C7C.6030108@heh.uni-oldenburg.de> Bjarne D?cker schrieb: > Hi > > >> You watch it and think "Is it meant serious? Or are these folks just >> having fun trying out the new departments video camera instead of >> working? How do they manage not bursting out laughing?". I really need >> the secret file, the one with the outtakes. >> >> BTW I showed this on a laptop during coffeebreak, but I fear it didn't >> brainwash the attending Java programmers. However they liked the frame >> with the old X11 surface and asked me what desktop I was using on the >> laptop. 
:-) >> > > The International Switching Symposium used to be a very important conference > for the telecomms industry and in 1990 it took place in Stockholm. The > conference included a day of technical visits and many were to Ericsson. The > Computer Science Laboratory had prepared a presentation which was run eight > times during one day. Afterwards Joe and others decided to video record the > demo for posterity and for others to enjoy. That seems to have been a very > good idea! > > Bjarne > > > > i still wait for the wazuuuuuuuuuuuuuuup ;-) (in case you dont know, video.google.com for budweise advertisements!!!) nils From eduardo@REDACTED Wed Mar 1 13:22:15 2006 From: eduardo@REDACTED (Eduardo Figoli (INS)) Date: Wed, 1 Mar 2006 10:22:15 -0200 Subject: PIDs an priority References: <012201c63beb$8db328c0$4a00a8c0@Inswitch251> Message-ID: <06ba01c63d2a$c9042290$4a00a8c0@Inswitch251> Thanks Ulf, Ok that will enter the process as quickly as possible and also make the process high priority. Do you know what does imply in ERTS making a process high priority. I mean, for example 1000 child processes all of the them high priority, will it be a time when the main process (the one which spawns child processes) starts queueing messages? more CPU power to handle thousands with good response times? regards, Eduardo ----- Original Message ----- From: "Ulf Wiger" To: "Eduardo Figoli (INS)" ; Sent: Monday, February 27, 2006 9:50 PM Subject: Re: PIDs an priority Den 2006-02-27 23:16:15 skrev Eduardo Figoli (INS) : > I have seen that ERTS makes priority to spawning processes rather than > processing the ones already spawned.How does this works in ERTS? Spawn is an asynchronous operation. The spawned process will be put in the scheduler queue while the spawning process continues. If you wanted to make the spawned process enter as quickly as possible, you could do something like this: start_child() -> Pid = spawn_opt( fun() -> ... end, [{priority, high}, link]), erlang:yield(), Pid. -- Ulf Wiger From valentin@REDACTED Wed Mar 1 13:49:30 2006 From: valentin@REDACTED (Valentin Micic) Date: Wed, 1 Mar 2006 14:49:30 +0200 Subject: Erlang: The Movie References: <655ACDFB-5860-4BA6-9475-F8D8E9E98CE8@citeulike.org> <44050EF1.1090008@charter.net> <44055433.9040609@fernuni-hagen.de> <004301c63d18$c52232a0$651069d4@segeltorp> <44057C7C.6030108@heh.uni-oldenburg.de> Message-ID: <019d01c63d2e$9cbab6f0$7309fea9@MONEYMAKER2> Guys, guys... you should've paid ransom... ;-) V. From ernie.makris@REDACTED Wed Mar 1 14:23:13 2006 From: ernie.makris@REDACTED (Ernie Makris) Date: Wed, 01 Mar 2006 08:23:13 -0500 Subject: cvs rtag -r branch -d yesterday my_branch_yesterday (workaround) In-Reply-To: References: <4404842E.7080902@comcast.net> Message-ID: <4405A041.3000903@comcast.net> I apply for clemency from the erlang gods! This is an outrage:) Luke Gorrie wrote: > Ernie Makris writes: > > >> Or you can switch to subversion:) >> > > Incitement to switch version control systems is a serious offense here > in the Workers' Republic of Synapse! You have been tried and sentenced > to death in absentia. 
> > > > From nils.muellner@REDACTED Wed Mar 1 14:28:42 2006 From: nils.muellner@REDACTED (=?windows-1252?Q?Nils_M=FCllner?=) Date: Wed, 01 Mar 2006 14:28:42 +0100 Subject: Erlang: The Movie In-Reply-To: <000801c63d32$3dc1ad80$0f1169d4@segeltorp> References: <655ACDFB-5860-4BA6-9475-F8D8E9E98CE8@citeulike.org> <44050EF1.1090008@charter.net> <44055433.9040609@fernuni-hagen.de> <004301c63d18$c52232a0$651069d4@segeltorp> <44057C7C.6030108@heh.uni-oldenburg.de> <000801c63d32$3dc1ad80$0f1169d4@segeltorp> Message-ID: <4405A18A.6030609@heh.uni-oldenburg.de> Bjarne D?cker schrieb: > >>> >>> >> i still wait for the wazuuuuuuuuuuuuuuup ;-) (in case you dont know, >> video.google.com for budweise advertisements!!!) >> nils >> > > Do you mean video.google or video.giggle ? > > Bjarne > > > try this one. if ill find the time maybe i can do a re-synchro of the erlang movie ;-) http://www.pocketmovies.net/subcat_18_0.html nils From ulf.wiger@REDACTED Wed Mar 1 14:44:01 2006 From: ulf.wiger@REDACTED (Ulf Wiger (AL/EAB)) Date: Wed, 1 Mar 2006 14:44:01 +0100 Subject: PIDs an priority Message-ID: Eduardo Figoli wrote: > > Do you know what does imply in ERTS making a process high priority. > I mean, for example 1000 child processes all of the them high > priority, will it be a time when the main process (the one > which spawns child processes) starts queueing messages? more > CPU power to handle thousands with good response times? http://www.erlang.org/ml-archive/erlang-questions/200104/msg00072.html If you end up with most processes running at high priority, you might as well bump them down to normal priority. Note that the 'max' and 'high' priority levels are strict. As long as processess are active at that level, lower priorities get no cpu time at all. /Ulf W From mike@REDACTED Wed Mar 1 17:25:27 2006 From: mike@REDACTED (Michael Williams) Date: 1 Mar 2006 16:25:27 GMT Subject: Erlang: The Movie References: Message-ID: In article , joe.armstrong@REDACTED (Joe Armstrong AL/EAB) writes: ---SNIP--- |> |> Note the differnce in my hair colour - all this Erlang stuff has |> made my hair go gray! It's the difference in my weight which worries me...... /mike From robert.virding@REDACTED Wed Mar 1 17:37:20 2006 From: robert.virding@REDACTED (Robert Virding) Date: Wed, 01 Mar 2006 17:37:20 +0100 Subject: Erlang: The Movie In-Reply-To: References: Message-ID: <4405CDC0.3050109@telia.com> Michael Williams skrev: > In article , > joe.armstrong@REDACTED (Joe Armstrong AL/EAB) writes: > ---SNIP--- > |> > |> Note the differnce in my hair colour - all this Erlang stuff has > |> made my hair go gray! > > It's the difference in my weight which worries me...... The trick is not too worry. Next year there is sure to be a study which proves that the extra weight is beneficial. Robert From raimo@REDACTED Wed Mar 1 18:23:33 2006 From: raimo@REDACTED (Raimo Niskanen) Date: 01 Mar 2006 18:23:33 +0100 Subject: Silent call trace Message-ID: I just discovered a queer behaviour in the call trace. The man page says, concerning erlang:trace/3, trace flags: silent: Used in conjunction with the call trace flag. Call tracing is active and match specs are exe- cuted as normal, but no call trace messages are generated. Silent mode is inhibited by executing erlang:trace/3 without the silent flag, or by a match spec executing the {silent, false} func- tion. It seems the actual implementation is that a regular call trace is not affected by the 'silent' flag. 
It is only when there is a match spec on the trace point that the trace message gets silenced. I.e an empty match spec list to erlang:trace_pattern/3 cause trace messages to ignore the 'silent' flag. This could either be regarded as a bug in the documentation because one might argue that the 'silent' flag has always been intended to co-operate with match specs and should therefore only affect match spec trace points, or, it could be regarded as a bug in the implementation that behaves differently if there is a match spec vs an empty one. Opinions? Has anyone built an application needing the existing behaviour? [ The second paragraph in the quote above from the manual is also incorrect: you have to use e.g erlang:trace(Pid, false, [silent]) to disable the 'silent' flag. ] -- / Raimo Niskanen, Erlang/OTP, Ericsson AB From gronvald@REDACTED Wed Mar 1 22:19:29 2006 From: gronvald@REDACTED (Soren Gronvald) Date: Wed, 1 Mar 2006 13:19:29 -0800 (PST) Subject: http:request returns {error, session_remotly_closed} In-Reply-To: <86f1f5350602281810l6690a192u3730bcfc718bee52@mail.gmail.com> Message-ID: <20060301211929.40303.qmail@web32514.mail.mud.yahoo.com> Thanks for replies. I can make it work with ibrowse, but I still have the problem with http:request even when I do asynch. I found an earlier discussion from may 2005 about a similar problem: >I am not a 100 % sure but I think that your problem >could be related >to a pipeline bug that we fixed just recently. The >bug manifested >itself by returning {error, session_remotly_closed} for a request >that the manager wrongly tried to pipeline. >-- >/Ingela - OTP team http://www.erlang.org/ml-archive/erlang-questions/200505/msg00237.html Any one knows if this fix got into open source erlang, and what it was? Regards Gronvald --- Stephen Han wrote: > I ran your sample under Windows XP service pack 2 > with OTP R10B-9. > I could not reproduce the problem. > > One of my application do http:request every 5 > minutes. It has been doing > that since 2005 June. So far didn't see that error > since. But mine do > http:request every 5 min so yours probably do it > really fast without > interval. > > I think it may be the persistent connection that > confuses whoever that is, > http module or web server. :-) > > Try asynchronously and see if happens again. > > regards, > > > On 2/26/06, Soren Gronvald > wrote: > > > > hi, > > > > I have a problem with http:request that I think > could > > be a bug (but of course it could be due to > ignorance > > on my part). > > > > I am trying to read a url from yahoo.com in the > > simplest possible way - http:request(Url)- but it > > fails with the error > > > > {error,session_remotly_closed} > > > > This of course indicates that it is the server > that > > causes trouble, but I can read the same url with > other > > technologies (java and visual basic) with no > hassle at > > all. And I can read similar but shorter urls from > > yahaoo.com with erlang http:request without > problems. > > > > this url can be read successfully: > > U1 = > > " > > > http://finance.yahoo.com/d/quotes.csv?s=IBM,GE,GM&f=sl1d1t1c1baphgvn&e=.csv > > ". > > > > and this fails. > > U2 = > > " > > > http://finance.yahoo.com/d/quotes.csv?s=IBM,GE,GM,F,PKD,GW&f=sl1d1t1c1baphgvn&e=.csv > > ". > > > > both should return a comma separated list of stock > > exchange quotes from yahoo.com. > > > > As I can read the same url with other technologies > it > > makes me think that there is a bug in the http > client > > module. 
Or, could it be that I need some > > HTTPOptins/Options to make it work? > > > > I am using erlang R10B-9 on Windows XP SP2 > > professional edition with all updates applied. I > have > > downloaded the precompiled version from > erlang.org. > > > > Best regards > > Gronvald > > > > > > I have included program examples below > > > > this erlang program shows the problem > > > > % this works > > testurl(1) -> > > U = > > " > > > http://finance.yahoo.com/d/quotes.csv?s=IBM,GE,GM&f=sl1d1t1c1baphgvn&e=.csv > > ", > > geturl(U); > > > > % this exits > > testurl(2) -> > > U = > > " > > > http://finance.yahoo.com/d/quotes.csv?s=IBM,GE,GM,F,PKD,GW&f=sl1d1t1c1baphgvn&e=.csv > > ", > > geturl(U). > > > > geturl(Url) -> > > application:start(inets), > > X = http:request(Url), > > case X of > > {ok, { {_Version, 200, _ReasonPhrase}, _Headers, > > Body} } -> > > Body; > > _ -> X > > end. > > > > > > this java program will handle same situation > without > > error > > > > import java.net.*; > > import java.io.*; > > > > public class Geturl { > > > > public static void main(String[] args) throws > > Exception { > > > > String u2 = > > > > " > > > http://finance.yahoo.com/d/quotes.csv?s=IBM,GE,GM,F,PKD,GW&f=sl1d1t1c1baphgvn&e=.csv > > "; > > > > URL url = new URL(u2); > > InputStream is = new > > BufferedInputStream(url.openStream()); > > Reader r = new InputStreamReader(is); > > > > int c = r.read(); > > while(c != -1) { > > System.out.print((char)c); > > c = r.read(); > > } > > } > > } > > > > > > > > __________________________________________________ > > Do You Yahoo!? > > Tired of spam? Yahoo! Mail has the best spam > protection around > > http://mail.yahoo.com > > > __________________________________________________ Do You Yahoo!? Tired of spam? Yahoo! Mail has the best spam protection around http://mail.yahoo.com From ryanobjc@REDACTED Thu Mar 2 07:22:05 2006 From: ryanobjc@REDACTED (Ryan Rawson) Date: Wed, 1 Mar 2006 22:22:05 -0800 Subject: Erlang & Hyperthreading In-Reply-To: <20060228130943.86727.qmail@web34401.mail.mud.yahoo.com> References: <78568af10602271647w3241e124lc9b6ac3d0b8effb1@mail.gmail.com> <20060228130943.86727.qmail@web34401.mail.mud.yahoo.com> Message-ID: <78568af10603012222v87a1b30u1dfc05e4c4695663@mail.gmail.com> In my circumstance, I run a mnesia database on every node. Each node answers questions from its local database. So running N nodes on a N-CPU/SMP system ends up with N copies of the database on 1 machine. That isn't the end of the world, since practically any Unix/Linux application on a 32 machine can't use more than 1.5 GB RAM, but the issue I'd be worried about is the additional communications overhead. This is part of the reason why I'm interested in the SMP-aware Erlang. The overall problem I'm trying to solve is one of building a service. As in "SOA" - service oriented architecture. -ryan On 2/28/06, Thomas Lindgren wrote: > > > --- Ryan Rawson wrote: > > > As for performance - surely on a multicpu system > > with a SMP aware > > erlang running, one can expect better performance > > just by using > > multiple kernel threads and gaining the ability to > > run on several > > CPUs? Actual performance on a single cpu, I > > wouldn't expect any > > performance benefit. > > There is the extra cost of locking access to shared > data inside the VM; contention between threads; > waiting for sequential, global background tasks like > memory management [parallel GCs exist, though; not > sure what OTP uses], etc. 
> > These problems are not specific to Erlang, naturally, > but do tend to turn up in most parallel applications, > particularly those that aren't designed to get around > it. Symptoms: First, a multithreaded application on a > single CPU will normally see a slowdown compared to > the single-threaded version (depending on hardware, > language and implementation, the precise number varies > widely). Second, contention and sequentialization > limits potential speedup, both relative to the > multithreaded and the (faster) single-threaded base. > > If the application permits it, you could also try a > less convenient and less flexible solution: running > several distributed erlang nodes on the same SMP host. > Since the nodes run at arm's length, the performance > trade off is different. > > Best, > Thomas > > > __________________________________________________ > Do You Yahoo!? > Tired of spam? Yahoo! Mail has the best spam protection around > http://mail.yahoo.com > From bengt.kleberg@REDACTED Thu Mar 2 07:55:14 2006 From: bengt.kleberg@REDACTED (Bengt Kleberg) Date: Thu, 02 Mar 2006 07:55:14 +0100 Subject: Longstanding issues: structs & standalone Erlang In-Reply-To: <200602231115.30534.rlenglet@users.forge.objectweb.org> References: <43FC2AD8.1090901@hyber.org> <200602222334.21229.mikael.karlsson@creado.com> <200602231115.30534.rlenglet@users.forge.objectweb.org> Message-ID: <440696D2.1050404@ericsson.com> On 2006-02-23 03:15, Romain Lenglet wrote: ...deleted > Here is the same misunderstanding again... > The proposal was *not* to use ./configure ; make ; make install > for deployment / installation, but for *building packages*, > which in turn can be deployed and installed. i am sorry, but i have trouble understanding your terminology. what is the difference between *building packages* and ''deployed and installed''? ...deleted > To make this possible, a common form for source packages provided > by developers must be defined, e.g. by including a configure > script and a Makefile in every source tarball. > (Fredrik, you are right when writing that the configure script > must not necessarily be generated by GNU Autoconf...) i belive that when you say that ''configure must not necessarily be generated by GNU Autoconf'' you still expect ''configure'' to behave exactly as if it had been generated by gnu autotools. correct? > Nothing should prevent a packager to take a source Erlang > application, to build it by running the provided configure > script and Makefile rules, and to create a user-installable > package using any "pure-Erlang" packaging system you want: in > this scenario, end users of the package would not have to > install anything else than Erlang. from what you have sofar written i think that you are creating a unneccessary distinction between ''packager'' and ''end users''. i belive that for some (pure) applications their needs are similar enough to treat them as one and the same. i also think you want to force application developers to use a gnu development system, to make things simpler for yourself. even if other users of the ''application'' would have a harder time because of this requierment. i could be wrong again. ...deleted > Yes, just like there is a point in having a pure Ruby system that > can deploy "pure" Ruby packages, and a pure Java system that can > deploy "pure" Java packages, and a pure Perl system that can > deploy "pure" Perl packages, and a pure Python system that can > deploy "pure" Python packages... > (Sorry for the ironic tone... 
;-)) no need to excuse yourself, i see no irony. i have repeatedly come to the understanding that cpan is a reason for the success of perl. > I am just wondering how you can manage / upgrade applications on > a system with such a proliferation of incompatible packaging > systems? how about if they are not packaging systems? what if they are deployment systems? and they knows how to tailor the ''application'' for a particular hardware, os and installation directory? would it be such a nightmare for a ''professional packager'' to write the interface from his favored tool to these pure tools? it would only need to be done once for each tool. > Really, I am not against this idea. I just say that a pure Erlang > packaging system should not be imposed, and applications should > be delivered by developers in source form with a configure > script and a Makefile, and not in compiled form as a pure Erlang > installable package. i agree about not imposing, but i do not want to have a gnu system imposed. i think that a good erlang deployment system (for pure erlang applicaitons, etc) could handle uncompiled erlang. ...deleted > And even packages containing binary native code could be deployed > using a packaging system in pure Erlang, there is no problem > with that. > One issue would be to be able to have several versions of every > such package, one for every architecture / system, and the > system should be able to choose and install the right version on > a system. But this can be solved. i do not understand why it is neccessary to have several packages if the deploymenmt system can handle hardware, os and installation directory. bengt From bengt.kleberg@REDACTED Thu Mar 2 07:58:14 2006 From: bengt.kleberg@REDACTED (Bengt Kleberg) Date: Thu, 02 Mar 2006 07:58:14 +0100 Subject: Longstanding issues: structs & standalone Erlang In-Reply-To: <11498CB7D3FCB54897058DE63BE3897C013ED925@esealmw105.eemea.ericsson.se> References: <11498CB7D3FCB54897058DE63BE3897C013ED925@esealmw105.eemea.ericsson.se> Message-ID: <44069786.2010008@ericsson.com> On 2006-02-23 09:14, Vlad Dumitrescu XX (LN/EAB) wrote: ...deleted > I found an interesting tool for Java, called Macker > (http://innig.net/macker/), that I think would be nice to translate to > any language. The idea is to define the architecture in a file and let > the tool check if the code is structured accordingly or not, spitting > out errors and warnings. do you think it is ''better'' to have the rules arranged after allow/deny at the top level, or after modules at the top? bengt From vlad_dumitrescu@REDACTED Thu Mar 2 09:04:09 2006 From: vlad_dumitrescu@REDACTED (Vlad Dumitrescu) Date: Thu, 2 Mar 2006 09:04:09 +0100 Subject: Longstanding issues: structs & standalone Erlang In-Reply-To: <44069786.2010008@ericsson.com> Message-ID: Hi, My personal opinion is that by modules is better, because I handle a module at a time. It would also be easier to provide a small rules file per module (maybe even embedded in the source code?) But the rules could be viewed in a viewer that could show them in any grouping and sorting order. 
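As a rough illustration of how such rules could look for Erlang (everything below is hypothetical: the rule file format, the module name, the shape of the result), a per-module rules file could simply be an Erlang term file read with file:consult/1, and the external calls of a compiled module can be taken from the beam file's import table:

-module(arch_check).
-export([check/2]).

%% A rules file might contain terms like:
%%   {deny, my_gui, [mnesia, my_db]}.   %% my_gui must not call these modules
check(BeamFile, RulesFile) ->
    {ok, Rules} = file:consult(RulesFile),
    %% The "imports" chunk lists the external {M,F,A} calls of the module.
    {ok, {Mod, [{imports, Imports}]}} = beam_lib:chunks(BeamFile, [imports]),
    Called = lists:usort([M || {M, _F, _A} <- Imports]),
    [{violation, Mod, M} || {deny, RMod, Denied} <- Rules,
                            RMod =:= Mod,
                            M <- Called,
                            lists:member(M, Denied)].

Whether the rules live in one file per module or in a single architecture file only changes where RulesFile comes from; the check itself stays the same.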
/Vlad > -----Original Message----- > From: owner-erlang-questions@REDACTED > [mailto:owner-erlang-questions@REDACTED] On Behalf Of Bengt Kleberg > Sent: Thursday, March 02, 2006 7:58 AM > To: erlang-questions@REDACTED > Subject: Re: Longstanding issues: structs & standalone Erlang > > On 2006-02-23 09:14, Vlad Dumitrescu XX (LN/EAB) wrote: > ...deleted > > I found an interesting tool for Java, called Macker > > (http://innig.net/macker/), that I think would be nice to > translate to > > any language. The idea is to define the architecture in a > file and let > > the tool check if the code is structured accordingly or > not, spitting > > out errors and warnings. > > do you think it is ''better'' to have the rules arranged > after allow/deny at the top level, or after modules at the top? > > > bengt > > From mats.cronqvist@REDACTED Thu Mar 2 09:30:28 2006 From: mats.cronqvist@REDACTED (Mats Cronqvist) Date: Thu, 02 Mar 2006 09:30:28 +0100 Subject: Silent call trace In-Reply-To: References: Message-ID: <4406AD24.5010107@ericsson.com> one has to wonder if this has ever been used by anyone (i know i've never used it)... the documentation seems more reasonable than the implementation to me. mats Raimo Niskanen wrote: > I just discovered a queer behaviour in the call trace. > The man page says, concerning erlang:trace/3, trace flags: > > silent: > Used in conjunction with the call trace flag. > Call tracing is active and match specs are exe- > cuted as normal, but no call trace messages are > generated. > > Silent mode is inhibited by executing > erlang:trace/3 without the silent flag, or by a > match spec executing the {silent, false} func- > tion. > > It seems the actual implementation is that a regular call > trace is not affected by the 'silent' flag. It is only when > there is a match spec on the trace point that the trace > message gets silenced. I.e an empty match spec list to > erlang:trace_pattern/3 cause trace messages to ignore > the 'silent' flag. > > This could either be regarded as a bug in the documentation > because one might argue that the 'silent' flag has always > been intended to co-operate with match specs and should > therefore only affect match spec trace points, > or, > it could be regarded as a bug in the implementation that > behaves differently if there is a match spec vs an empty one. > > Opinions? > > Has anyone built an application needing the existing behaviour? > > > [ The second paragraph in the quote above from > the manual is also incorrect: you have to use > e.g erlang:trace(Pid, false, [silent]) > to disable the 'silent' flag. ] > From bertil.karlsson@REDACTED Thu Mar 2 10:03:37 2006 From: bertil.karlsson@REDACTED (Bertil Karlsson) Date: Thu, 02 Mar 2006 10:03:37 +0100 Subject: http:request returns {error, session_remotly_closed} In-Reply-To: <20060301211929.40303.qmail@web32514.mail.mud.yahoo.com> References: <20060301211929.40303.qmail@web32514.mail.mud.yahoo.com> Message-ID: <4406B4E9.8040606@ericsson.com> There is a debug option you can use on the client, so far undocumented and a bit ugly printouts, but you will get something that may help debugging. It prints what the client sends and receives. Before your request do: http:set_options([verbose]). /Bertil Soren Gronvald wrote: >Thanks for replies. > >I can make it work with ibrowse, but I still have the >problem with http:request even when I do asynch. 
> >I found an earlier discussion from may 2005 about a >similar problem: > > > >>I am not a 100 % sure but I think that your problem >>could be related >>to a pipeline bug that we fixed just recently. The >>bug manifested >>itself by returning {error, session_remotly_closed} >> >> >for a request > > >>that the manager wrongly tried to pipeline. >> >> > > > >>-- >>/Ingela - OTP team >> >> > > >http://www.erlang.org/ml-archive/erlang-questions/200505/msg00237.html > >Any one knows if this fix got into open source erlang, >and what it was? > >Regards >Gronvald > > >--- Stephen Han wrote: > > > >>I ran your sample under Windows XP service pack 2 >>with OTP R10B-9. >>I could not reproduce the problem. >> >>One of my application do http:request every 5 >>minutes. It has been doing >>that since 2005 June. So far didn't see that error >>since. But mine do >>http:request every 5 min so yours probably do it >>really fast without >>interval. >> >>I think it may be the persistent connection that >>confuses whoever that is, >>http module or web server. :-) >> >>Try asynchronously and see if happens again. >> >>regards, >> >> >>On 2/26/06, Soren Gronvald >>wrote: >> >> >>>hi, >>> >>>I have a problem with http:request that I think >>> >>> >>could >> >> >>>be a bug (but of course it could be due to >>> >>> >>ignorance >> >> >>>on my part). >>> >>>I am trying to read a url from yahoo.com in the >>>simplest possible way - http:request(Url)- but it >>>fails with the error >>> >>>{error,session_remotly_closed} >>> >>>This of course indicates that it is the server >>> >>> >>that >> >> >>>causes trouble, but I can read the same url with >>> >>> >>other >> >> >>>technologies (java and visual basic) with no >>> >>> >>hassle at >> >> >>>all. And I can read similar but shorter urls from >>>yahaoo.com with erlang http:request without >>> >>> >>problems. >> >> >>>this url can be read successfully: >>>U1 = >>>" >>> >>> >>> >http://finance.yahoo.com/d/quotes.csv?s=IBM,GE,GM&f=sl1d1t1c1baphgvn&e=.csv > > >>>". >>> >>>and this fails. >>>U2 = >>>" >>> >>> >>> >http://finance.yahoo.com/d/quotes.csv?s=IBM,GE,GM,F,PKD,GW&f=sl1d1t1c1baphgvn&e=.csv > > >>>". >>> >>>both should return a comma separated list of stock >>>exchange quotes from yahoo.com. >>> >>>As I can read the same url with other technologies >>> >>> >>it >> >> >>>makes me think that there is a bug in the http >>> >>> >>client >> >> >>>module. Or, could it be that I need some >>>HTTPOptins/Options to make it work? >>> >>>I am using erlang R10B-9 on Windows XP SP2 >>>professional edition with all updates applied. I >>> >>> >>have >> >> >>>downloaded the precompiled version from >>> >>> >>erlang.org. >> >> >>>Best regards >>>Gronvald >>> >>> >>>I have included program examples below >>> >>>this erlang program shows the problem >>> >>>% this works >>>testurl(1) -> >>>U = >>>" >>> >>> >>> >http://finance.yahoo.com/d/quotes.csv?s=IBM,GE,GM&f=sl1d1t1c1baphgvn&e=.csv > > >>>", >>>geturl(U); >>> >>>% this exits >>>testurl(2) -> >>>U = >>>" >>> >>> >>> >http://finance.yahoo.com/d/quotes.csv?s=IBM,GE,GM,F,PKD,GW&f=sl1d1t1c1baphgvn&e=.csv > > >>>", >>>geturl(U). >>> >>>geturl(Url) -> >>>application:start(inets), >>>X = http:request(Url), >>>case X of >>>{ok, { {_Version, 200, _ReasonPhrase}, _Headers, >>>Body} } -> >>> Body; >>>_ -> X >>>end. 
>>> >>> >>>this java program will handle same situation >>> >>> >>without >> >> >>>error >>> >>>import java.net.*; >>>import java.io.*; >>> >>>public class Geturl { >>> >>>public static void main(String[] args) throws >>>Exception { >>> >>>String u2 = >>> >>>" >>> >>> >>> >http://finance.yahoo.com/d/quotes.csv?s=IBM,GE,GM,F,PKD,GW&f=sl1d1t1c1baphgvn&e=.csv > > >>>"; >>> >>>URL url = new URL(u2); >>>InputStream is = new >>>BufferedInputStream(url.openStream()); >>>Reader r = new InputStreamReader(is); >>> >>>int c = r.read(); >>>while(c != -1) { >>> System.out.print((char)c); >>> c = r.read(); >>>} >>>} >>>} >>> >>> >>> >>>__________________________________________________ >>>Do You Yahoo!? >>>Tired of spam? Yahoo! Mail has the best spam >>> >>> >>protection around >> >> >>>http://mail.yahoo.com >>> >>> >>> > > >__________________________________________________ >Do You Yahoo!? >Tired of spam? Yahoo! Mail has the best spam protection around >http://mail.yahoo.com > > From rlenglet@REDACTED Thu Mar 2 10:58:32 2006 From: rlenglet@REDACTED (Romain Lenglet) Date: Thu, 2 Mar 2006 18:58:32 +0900 Subject: Longstanding issues: structs & standalone Erlang In-Reply-To: <440696D2.1050404@ericsson.com> References: <200602231115.30534.rlenglet@users.forge.objectweb.org> <440696D2.1050404@ericsson.com> Message-ID: <200603021858.32638.rlenglet@users.forge.objectweb.org> > > Here is the same misunderstanding again... > > The proposal was *not* to use ./configure ; make ; make > > install for deployment / installation, but for *building > > packages*, which in turn can be deployed and installed. > > i am sorry, but i have trouble understanding your terminology. > what is the difference between *building packages* and > ''deployed and installed''? On Debian, building a package means: 1- taking the sources provided by a developer, 2- apply some patches, e.g. to have files installed in the right location according to the Debian policy, to write manpages to programs that don't have one, to add shell scripts to start applications (e.g. Java applications often don't come with appropriate shell scripts to start them), etc. 3- define packaging-specific files, e.g. on Debian: - the control file specify what "binary" packages will be generated from that single "source" package, and what are the dependencies of the source package (for building it), and of every binary package (for installing and using it). - the packaging changelog (which has a strict format, because it is interpreted by tools to get the package version and packager name) - the copyright file (which follows some conventions) - the rules file, which is in fact a Makefile which rules must have standard names (build, install, etc.): usually, these rules simply call make for the developer-provided Makefile's rules - etc. 4- run the package's building Makefile rules: this generates the installable .deb package files for the current architecture (e.g. i386) and a source package tarball, and digitally signs the files, 5- upload the .deb and source package files into the official repository (or a private repository) using Debian's tools (dput, etc.) 6- if uploaded into the official Debian repository: - the architecture-specific binary packages are automatically built (compiled, etc. cf. 
step 4) for the 10+ hardware architectures supported by Debian, from the source package tarball, and any build bug is reported to the packager - any bug that is declared to be closed in the source package's changelog are automatically closed in Debian's Bug Tracking System - etc. Being a Debian packager also means tracking bug reports (sometimes related only to packaging), tracking changes in packages we depend on, changes in the Debian policy, etc. And be an interface between Debian users and upstream developers (to submit patches, etc.). Then, people can use the Debian commands (apt-get...) to automatically download the .deb files from the repositories and install, or upgrade, or uninstall them. Installation typically takes one command, e.g.: $ apt-get install gtknode > > To make this possible, a common form for source packages > > provided by developers must be defined, e.g. by including a > > configure script and a Makefile in every source tarball. > > (Fredrik, you are right when writing that the configure > > script must not necessarily be generated by GNU Autoconf...) > > i belive that when you say that ''configure must not > necessarily be generated by GNU Autoconf'' you still expect > ''configure'' to behave exactly as if it had been generated by > gnu autotools. correct? Yes and no. In fact the basic "interface" provided by a configure script is described in the GNU Coding Standards: http://www.gnu.org/prep/standards/html_node/Configuration.html#Configuration Autotoconf-generated configures provide more options. We could specify our own subset or superset of those options. For instance, the --prefix option provided by Autoconf-generated configures is very useful, as Fredrik pointed out. Because normally a source package should copy files into /usr/local on Unix, but Debian (and any other Linux distribution, I guess) requires packages to install into /usr. Then, it is very usual to specify --prefix=/usr to the configure scripts. > > Nothing should prevent a packager to take a source Erlang > > application, to build it by running the provided configure > > script and Makefile rules, and to create a user-installable > > package using any "pure-Erlang" packaging system you want: > > in this scenario, end users of the package would not have to > > install anything else than Erlang. > > from what you have sofar written i think that you are creating > a unneccessary distinction between ''packager'' and ''end > users''. i belive that for some (pure) applications their > needs are similar enough to treat them as one and the same. This distinction exists, believe it or not. Perhaps you don't see it on the systems that you use, but it exists. I am a Debian user, and as an end user the only thing that I accept to do to install an application is executing: $ apt-get install application (idem for FreeBSD users, etc. of course) If packagers do not exist, then who creates the .deb files that I install? Just take a look at a well-known application: Unison. Unison is a "pure" OCaml application. http://www.cis.upenn.edu/~bcpierce/unison/download.html It is distributed only in source form (as a tarball) by its developer. All packages, for many packaging systems (Fink, FreeBSD, Debian, etc.) are made by different packagers, who are specialists of their packaging systems. As you can see, there is a language-specific packaging system for OCaml applications, called GODI. 
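To make the proposed developer-packager interface concrete for Erlang, a minimal sketch of the Makefile half is shown below. The application name, layout and install path are invented, and a real source package would add a configure step to fill in prefix and compiler locations instead of hard-coding them:

# Hypothetical Makefile for a pure Erlang source package, supporting the
# usual packager workflow:
#   ./configure --prefix=/usr ; make ; make install DESTDIR=/tmp/pkgroot
prefix  = /usr/local
appdir  = $(prefix)/lib/erlang/lib/myapp-1.0
ERLC    = erlc
SOURCES = $(wildcard src/*.erl)
BEAMS   = $(patsubst src/%.erl,ebin/%.beam,$(SOURCES))

all: $(BEAMS)

ebin/%.beam: src/%.erl
	$(ERLC) -o ebin $<

install: all
	install -d $(DESTDIR)$(appdir)/ebin
	install -m 644 ebin/*.beam ebin/myapp.app $(DESTDIR)$(appdir)/ebin

clean:
	rm -f ebin/*.beam

Nothing here is specific to GNU tools; it only fixes the targets and variables (all, install, clean, prefix, DESTDIR) that a packager can rely on.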
But the Unison GODI package is made by just another packager, and Unison is not provided by the Unison developer itself as a GODI package but as a source tarball because it would make packaging difficult for all the other packagers. And "everybody is happy": the developer does not have to matter for packaging issues, packagers get sources from the developer in a packaging-friendly form (tarball with Makefile), and end-users have packages readilly installable on their system (be it MacOS X, FreeBSD, Debian, etc.). In the case of Unison, there is no configure script in the source tarball: configuration is mixed up with the Makefile, which I think is a bad idea. But this is still friendly to packagers. > i also think you want to force application developers to use a > gnu development system, to make things simpler for yourself. Of course, I try to make things simpler for myself. ;-) > even if other users of the ''application'' would have a harder > time because of this requierment. i could be wrong again. I am not particularly attached to GNU tools. I want to have a common, general, simple interface between Erlang application developers and packagers. In the Unix world, this interface is ./configure ; make ; make install. This is a starting point for discussion. The good point about ./configure ; make ; make install is that it makes no assumption about packaging: it deals with configuration and building *only*. And any developer-packager interface should limit to that also. > > Yes, just like there is a point in having a pure Ruby system > > that can deploy "pure" Ruby packages, and a pure Java system > > that can deploy "pure" Java packages, and a pure Perl system > > that can deploy "pure" Perl packages, and a pure Python > > system that can deploy "pure" Python packages... > > (Sorry for the ironic tone... ;-)) > > no need to excuse yourself, i see no irony. i have repeatedly > come to the understanding that cpan is a reason for the > success of perl. If tomorrow Debian stopped packaging Perl modules, and I were forced to use CPAN to install Perl modules, I would simply stop using Perl application. CPAN is not end-user friendly, because it interferes with the OS's packaging system (when there is one, of course). > > I am just wondering how you can manage / upgrade > > applications on a system with such a proliferation of > > incompatible packaging systems? > > how about if they are not packaging systems? what if they are > deployment systems? I do not distinguish between packaging and deployment systems: I think that they are synonymous. > and they knows how to tailor the > ''application'' for a particular hardware, os and installation > directory? > would it be such a nightmare for a ''professional packager'' > to write the interface from his favored tool to these pure > tools? it would only need to be done once for each tool. It depends... It can really be a nightmare if those tools are not designed as a developer-packager interface. First, it would be a nightmare if those pure tools do anything else than configuring and building (e.g. if they try to download code at build time, or try to install files anywhere...). > > Really, I am not against this idea. I just say that a pure > > Erlang packaging system should not be imposed, and > > applications should be delivered by developers in source > > form with a configure script and a Makefile, and not in > > compiled form as a pure Erlang installable package. > > i agree about not imposing, but i do not want to have a gnu > system imposed. 
OK. :-) ./configure --prefix=... ; ?make ; make install DESTDIR=... is a starting point for specifying a configuration and building interface. And this does not have to be GNU-specific. I am not even against the idea of having the equivalent of the configure scripts and Makefiles implemented in Erlang. > i think that a good erlang deployment system > (for pure erlang applicaitons, etc) could handle uncompiled > erlang. Sure. Only, Debian packagers and buildit packagers, etc. should not have to deal with it. Developers should not provide source code in a form that depends on that deployment system (or any other packaging or deployment system, for that matters). > > And even packages containing binary native code could be > > deployed using a packaging system in pure Erlang, there is > > no problem with that. > > One issue would be to be able to have several versions of > > every such package, one for every architecture / system, and > > the system should be able to choose and install the right > > version on a system. But this can be solved. > > i do not understand why it is neccessary to have several > packages if the deploymenmt system can handle hardware, os and > installation directory. In the scenario I described just above, there would have been one package containing binary code for i386/linux, one for i386/win32, one for ppc/darwin, one for alpha/linux, etc. "if the deploymenmt system can handle hardware, os", it only means that it can select the right package to install (a package being a unit of installation). -- Romain LENGLET From robert.virding@REDACTED Thu Mar 2 11:16:43 2006 From: robert.virding@REDACTED (Robert Virding) Date: Thu, 02 Mar 2006 11:16:43 +0100 Subject: optimization of list comprehensions In-Reply-To: References: Message-ID: <4406C60B.3030902@telia.com> I have been thinking about this a bit and I wonder if the constructing of the return list really causes any problems. Space-wise it will only be as large as the size of the input list, so I wouldn't worry. Robert Ulf Wiger (AL/EAB) skrev: > I've seen many examples of how people use > list comprehensions as a form of beautified > lists:foreach() - that is, they don't care > about the return value of the comprehension. > > I hesitate to say that it's bad practice, > even though one will build a potentially > large list unnecessarily, since it's > actually looks a lot nicer than using > lists:foreach(). > > Question: would it be in some way hideous > to introduce an optimization where such > list comprehensions do everything except > actually build the list? Then they could > execute in constant space. > > /Ulf W > From rrerlang@REDACTED Thu Mar 2 11:29:14 2006 From: rrerlang@REDACTED (Robert Raschke) Date: Thu, 2 Mar 2006 10:29:14 +0000 Subject: soap? Message-ID: <64ff89983f9450b20a630fe807d2707d@tombob.com> Hi, I am currently looking for an easy way of providing a SOAP Service and also acting as a client to another SOAP Service. I have had a quick look at erlsoap-0.3, but am struggling to figure out how to make use of it. Does anyone have any pointers on how I could reasonably quickly set up a trivial SOAP server and also issue SOAP request to a foreign Service. I am looking for something really trivial like adding two numbers, to allow me to see the forest, instead of getting confused by all the trees. I have already had a look at using IIS and .NET, where it is trivial to set up a SOAP Service, but impossible to make use of it due to MS insisting on using '100 Continue' returns every'bleedin'where. 
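For the client half of the question, a rough sketch of posting a hand-written SOAP 1.1 envelope with the plain inets http client. The service URL, namespace, operation and parameter names below are all invented; a real WSDL-described service needs the matching element names and SOAPAction value:

soap_add(A, B) ->
    %% Call a hypothetical Add(a, b) operation by POSTing an envelope.
    application:start(inets),
    Url = "http://example.com/calc",
    Envelope =
        "<?xml version=\"1.0\"?>"
        "<soap:Envelope xmlns:soap=\"http://schemas.xmlsoap.org/soap/envelope/\">"
        "<soap:Body><Add xmlns=\"http://example.com/calc\">"
        "<a>" ++ integer_to_list(A) ++ "</a>"
        "<b>" ++ integer_to_list(B) ++ "</b>"
        "</Add></soap:Body></soap:Envelope>",
    Headers = [{"SOAPAction", "\"http://example.com/calc/Add\""}],
    http:request(post, {Url, Headers, "text/xml; charset=utf-8", Envelope},
                 [], []).

Picking the result out of the returned body is then ordinary XML parsing, e.g. with xmerl_scan:string/1.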
Thanks for any pointers. Robby From erik.reitsma@REDACTED Thu Mar 2 11:31:31 2006 From: erik.reitsma@REDACTED (Erik Reitsma (RY/ETM)) Date: Thu, 2 Mar 2006 11:31:31 +0100 Subject: optimization of list comprehensions Message-ID: <110BA8ACEE682C479D0B008B6BE4AEB101F45593@esealmw107.eemea.ericsson.se> If the return value would be large, you would not want it in the list. But then you could still do: 1> [ begin io:format("~w~n",[X]), empty end || X <- [1,2,3,4,5] ]. 1 2 3 4 5 [empty,empty,empty,empty,empty] 2> This way the return value of the function with side effect is not added to the list. I think it is a bit ugly, though. *Erik. Robert wrote: > I have been thinking about this a bit and I wonder if the > constructing of the return list really causes any problems. > Space-wise it will only be as large as the size of the input > list, so I wouldn't worry. > > Robert > > Ulf Wiger (AL/EAB) skrev: > > I've seen many examples of how people use list comprehensions as a > > form of beautified > > lists:foreach() - that is, they don't care about the return > value of > > the comprehension. > > > > I hesitate to say that it's bad practice, even though one > will build a > > potentially large list unnecessarily, since it's actually > looks a lot > > nicer than using lists:foreach(). > > > > Question: would it be in some way hideous to introduce an > optimization > > where such list comprehensions do everything except > actually build the > > list? Then they could execute in constant space. > > > > /Ulf W > > > From mats.cronqvist@REDACTED Thu Mar 2 12:30:37 2006 From: mats.cronqvist@REDACTED (Mats Cronqvist) Date: Thu, 02 Mar 2006 12:30:37 +0100 Subject: optimization of list comprehensions In-Reply-To: <4406C60B.3030902@telia.com> References: <4406C60B.3030902@telia.com> Message-ID: <4406D75D.5060805@ericsson.com> what about this then? [foo ! x(X,Y,Z) || X <- [a,b,c], Y<-[d,e,f], Z<-[g,h,i]]. but my real problem with running side-effect-only code in list comprehensions is that it's counter-intuitive. i use a list constructor (because of the nice syntax) even though i don't actually want a list. it is also a bit worrying that you have to be a vm guru to be able to figure out if it's space-safe or not. mats Robert Virding wrote: > I have been thinking about this a bit and I wonder if the constructing > of the return list really causes any problems. Space-wise it will only > be as large as the size of the input list, so I wouldn't worry. > > Robert > > Ulf Wiger (AL/EAB) skrev: > >> I've seen many examples of how people use >> list comprehensions as a form of beautified >> lists:foreach() - that is, they don't care >> about the return value of the comprehension. >> >> I hesitate to say that it's bad practice, >> even though one will build a potentially >> large list unnecessarily, since it's actually looks a lot nicer than >> using lists:foreach(). >> >> Question: would it be in some way hideous >> to introduce an optimization where such >> list comprehensions do everything except >> actually build the list? Then they could >> execute in constant space. >> >> /Ulf W >> From surindar.shanthi@REDACTED Thu Mar 2 13:56:26 2006 From: surindar.shanthi@REDACTED (Surindar Sivanesan) Date: Thu, 2 Mar 2006 18:26:26 +0530 Subject: OTP tree doubts Message-ID: <42ea5fb60603020456u2932b8d4qc60dbab06f0787fa@mail.gmail.com> Hi all, I have some doubts in OTP design. 
1.In gen_event behaviour, it is mentioned in the document that, start_link() function creates an event manager process as part of a supervision tree.Butin that function there is no parameter of supervisor reference. In case of gen_server or gen_fsm behaviour, the start_link function has the supervisor reference as parameter. I'm able to understand gen_fsm and gen_server but gen_event is still confusing. If there is any example applying gen_event, please give me. 2.I can create a thread which continuously running in a loop ex: -module(sample). start()-> spawn(sample,continuous,[]). continuous()-> %%Functionality is done here continuous(). whether the same type of thread is implemented as child like gen_fsm, gen_server, gen_event in OTP tree. Please clarify me -- with regards, S.Surindar -------------- next part -------------- An HTML attachment was scrubbed... URL: From Lennart.Ohman@REDACTED Thu Mar 2 14:13:13 2006 From: Lennart.Ohman@REDACTED (=?iso-8859-1?Q?Lennart_=D6hman?=) Date: Thu, 2 Mar 2006 14:13:13 +0100 Subject: SV: OTP tree doubts References: <42ea5fb60603020456u2932b8d4qc60dbab06f0787fa@mail.gmail.com> Message-ID: Hi Surindar, I am not sure of what you refer to as there being a reference to a supervisor in for instance gen_server:start_link, but non in gen_event:start_link. There is infact no reference to the supervisor in gen_server:start_link either. The relationship between the supervisor and its newly created child process is established by the gen_server or gen_event code automatically. And of course the fact that a client function calling the start_link function is called from the the supervisor in questions. A supervisor is made to call the clientfunction starting the new child by placing it in its children specification. Have you taken a look at the Working with OTP chapters in the online documentation? There is for instance an example on how to start an event-manager process using gen_event. http://www.erlang.org/doc/doc-5.4.12/doc/design_principles/part_frame.html Best Regards, Lennart ------------------------------------------------------------- Lennart Ohman phone : +46-8-587 623 27 Sj?land & Thyselius Telecom AB cellular: +46-70-552 6735 Sehlstedtsgatan 6 fax : +46-8-667 8230 SE-115 28 STOCKHOLM, SWEDEN email : lennart.ohman@REDACTED ________________________________ Fr?n: owner-erlang-questions@REDACTED genom Surindar Sivanesan Skickat: to 2006-03-02 13:56 Till: erlang-questions@REDACTED ?mne: OTP tree doubts Hi all, I have some doubts in OTP design. 1.In gen_event behaviour, it is mentioned in the document that, start_link() function creates an event manager process as part of a supervision tree.But in that function there is no parameter of supervisor reference. In case of gen_server or gen_fsm behaviour, the start_link function has the supervisor reference as parameter. I'm able to understand gen_fsm and gen_server but gen_event is still confusing. If there is any example applying gen_event, please give me. 2.I can create a thread which continuously running in a loop ex: -module(sample). start()-> spawn(sample,continuous,[]). continuous()-> %%Functionality is done here continuous(). whether the same type of thread is implemented as child like gen_fsm, gen_server, gen_event in OTP tree. Please clarify me -- with regards, S.Surindar -------------- next part -------------- An HTML attachment was scrubbed... 
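A minimal sketch of what Lennart describes, with invented names (my_events for the manager, my_logger for one handler). The child spec in the leading comment is what would go into the supervisor's init/1; as with gen_server, the supervisor link is created by start_link itself, so no supervisor reference is passed. The loop process from question 2 is normally written as a gen_server (or gen_fsm) instead, so that it fits into the same supervision tree.

%% In the supervisor's init/1, a child spec for the event manager
%% (note Modules = dynamic for an event manager):
%%   {my_events, {gen_event, start_link, [{local, my_events}]},
%%    permanent, 5000, worker, dynamic}

-module(my_logger).
-behaviour(gen_event).
-export([add_handler/0]).
-export([init/1, handle_event/2, handle_call/2, handle_info/2,
         terminate/2, code_change/3]).

%% Install this handler in the running event manager.
add_handler() ->
    gen_event:add_handler(my_events, ?MODULE, []).

init([]) -> {ok, []}.

handle_event(Event, State) ->
    io:format("event: ~p~n", [Event]),
    {ok, State}.

handle_call(_Request, State) -> {ok, ok, State}.
handle_info(_Info, State) -> {ok, State}.
terminate(_Arg, _State) -> ok.
code_change(_OldVsn, State, _Extra) -> {ok, State}.

After add_handler/0 has been called, gen_event:notify(my_events, something_happened) reaches handle_event/2 in every installed handler.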
URL: From serge@REDACTED Thu Mar 2 14:36:44 2006 From: serge@REDACTED (Serge Aleynikov) Date: Thu, 02 Mar 2006 08:36:44 -0500 Subject: optimization of list comprehensions In-Reply-To: <4406C60B.3030902@telia.com> References: <4406C60B.3030902@telia.com> Message-ID: <4406F4EC.2060706@hq.idt.net> I do want to throw a vote for Mats' suggestion on the alternative syntax: (I || I <- List) -> ok What I also find limiting is that it's not possible to have an accumulator when using list comprehension. Perhaps something like this could also be considered (unless someone can suggest a better syntax): [Acc+1, I || Acc, I <- List](0) -> Acc1 ^ | Initial Acc's value Serge Robert Virding wrote: > I have been thinking about this a bit and I wonder if the constructing > of the return list really causes any problems. Space-wise it will only > be as large as the size of the input list, so I wouldn't worry. > > Robert > > Ulf Wiger (AL/EAB) skrev: > >> I've seen many examples of how people use >> list comprehensions as a form of beautified >> lists:foreach() - that is, they don't care >> about the return value of the comprehension. >> >> I hesitate to say that it's bad practice, >> even though one will build a potentially >> large list unnecessarily, since it's actually looks a lot nicer than >> using lists:foreach(). >> >> Question: would it be in some way hideous >> to introduce an optimization where such >> list comprehensions do everything except >> actually build the list? Then they could >> execute in constant space. >> >> /Ulf W >> > From mats.cronqvist@REDACTED Thu Mar 2 15:03:15 2006 From: mats.cronqvist@REDACTED (Mats Cronqvist) Date: Thu, 02 Mar 2006 15:03:15 +0100 Subject: optimization of list comprehensions In-Reply-To: <4406F4EC.2060706@hq.idt.net> References: <4406C60B.3030902@telia.com> <4406F4EC.2060706@hq.idt.net> Message-ID: <4406FB23.4080802@ericsson.com> indeed. i tend to use lists:foldl, lists:map and lists:foreach a lot. i would really like to have comprehension syntax for all three of them. my suggestion for the foldl-like comprehension is (x(I,'_') || I <- List, '_' <- Acc0) -> Acc1 where '_' is the accumulator. it would shadow a previously bound '_'. the foreach comprehension (x(I) || I <- List) -> X (where X is x(L) and L is the last element of List) would be a special case of this. all real language designers should feel free to flame me. it'll be educational. mats Serge Aleynikov wrote: > I do want to throw a vote for Mats' suggestion on the alternative syntax: > > (I || I <- List) -> ok > > What I also find limiting is that it's not possible to have an > accumulator when using list comprehension. Perhaps something like this > could also be considered (unless someone can suggest a better syntax): > > [Acc+1, I || Acc, I <- List](0) -> Acc1 > ^ > | > Initial Acc's value > Serge > > Robert Virding wrote: > >> I have been thinking about this a bit and I wonder if the constructing >> of the return list really causes any problems. Space-wise it will only >> be as large as the size of the input list, so I wouldn't worry. >> >> Robert >> >> Ulf Wiger (AL/EAB) skrev: >> >>> I've seen many examples of how people use >>> list comprehensions as a form of beautified >>> lists:foreach() - that is, they don't care >>> about the return value of the comprehension. >>> >>> I hesitate to say that it's bad practice, >>> even though one will build a potentially >>> large list unnecessarily, since it's actually looks a lot nicer than >>> using lists:foreach(). 
>>> >>> Question: would it be in some way hideous >>> to introduce an optimization where such >>> list comprehensions do everything except >>> actually build the list? Then they could >>> execute in constant space. >>> >>> /Ulf W >>> >> > From per.gustafsson@REDACTED Thu Mar 2 16:27:57 2006 From: per.gustafsson@REDACTED (Per Gustafsson) Date: Thu, 02 Mar 2006 16:27:57 +0100 Subject: optimization of list comprehensions In-Reply-To: <4406FB23.4080802@ericsson.com> References: <4406C60B.3030902@telia.com> <4406F4EC.2060706@hq.idt.net> <4406FB23.4080802@ericsson.com> Message-ID: <44070EFD.9060605@it.uu.se> Hi Mats Do I understand your construction correctly if I describe it like this: A fold-expression have the following syntax: (expr(X1,..XN,Y1,...,YM) || PatX <- List, PatY <-- Acc0) Where X1,...,XN are variables from PatX and Y1,...,YN are variables from PatY, List is the list to fold over and Acc0 is the initial accumulator. This can be translated into the following code using lists:foldl: lists:foldl(fun(PatX,PatY) -> expr(X1,...,XN,Y1,...,YM) end, List, Acc0) An example to calculate the sum of a list of two-tuples: two_tuple_sum(List) -> ({X+XAcc,Y+YAcc} || {X,Y} <- List, {XAcc,YAcc} <-- {0,0}) It is trivial to add filters to this construction as well. It is less clear to me how it would work with multiple generators. To get mapfoldl would be a little bit trickier (less natural) but the following syntax could do the trick: [expr(X1,..XN,Y1,...,YM) || PatX <- List, PatY <-- Acc0] where expr(X1,..XN,Y1,...,YM) returns a two-tuple and the whole expression returns a two-tuple. (Which is unfortunate as it seems to return a list) This can be translated into the following code using lists:mapfoldl: lists:mapfoldl(fun(PatX,PatY) -> expr(X1,...,XN,Y1,...,YM) end, List, Acc0) An example to calculate the sum of a list and decrease each value in the list by one: sum_and_dec(List) -> [{X-1,X+Acc} || X<-List Acc <-- 0] Of course to get the foldr, mapfoldr versions the list generator should be written: List -> PatX :) I feel that the issue with adding (too many) of these kinds of constructs are that they tend to make the language harder to read because it becomes difficult to see the difference between the different constructs that look quite similar. Another issue is that it probably would be quite difficult to convince the parser to accept this. Per Gustafsson Mats Cronqvist wrote: > indeed. i tend to use lists:foldl, lists:map and lists:foreach a lot. > i would really like to have comprehension syntax for all three of them. > > my suggestion for the foldl-like comprehension is > > (x(I,'_') || I <- List, '_' <- Acc0) -> Acc1 > > where '_' is the accumulator. it would shadow a previously bound '_'. > the foreach comprehension > > (x(I) || I <- List) -> X > > (where X is x(L) and L is the last element of List) would be a special > case of this. > > all real language designers should feel free to flame me. it'll be > educational. > > mats > > Serge Aleynikov wrote: > >> I do want to throw a vote for Mats' suggestion on the alternative syntax: >> >> (I || I <- List) -> ok >> >> What I also find limiting is that it's not possible to have an >> accumulator when using list comprehension. 
Perhaps something like >> this could also be considered (unless someone can suggest a better >> syntax): >> >> [Acc+1, I || Acc, I <- List](0) -> Acc1 >> ^ >> | >> Initial Acc's value >> Serge >> >> Robert Virding wrote: >> >>> I have been thinking about this a bit and I wonder if the >>> constructing of the return list really causes any problems. >>> Space-wise it will only be as large as the size of the input list, so >>> I wouldn't worry. >>> >>> Robert >>> >>> Ulf Wiger (AL/EAB) skrev: >>> >>>> I've seen many examples of how people use >>>> list comprehensions as a form of beautified >>>> lists:foreach() - that is, they don't care >>>> about the return value of the comprehension. >>>> >>>> I hesitate to say that it's bad practice, >>>> even though one will build a potentially >>>> large list unnecessarily, since it's actually looks a lot nicer than >>>> using lists:foreach(). >>>> >>>> Question: would it be in some way hideous >>>> to introduce an optimization where such >>>> list comprehensions do everything except >>>> actually build the list? Then they could >>>> execute in constant space. >>>> >>>> /Ulf W >>>> >>> >> From ok@REDACTED Thu Mar 2 23:11:37 2006 From: ok@REDACTED (Richard A. O'Keefe) Date: Fri, 3 Mar 2006 11:11:37 +1300 (NZDT) Subject: optimization of list comprehensions Message-ID: <200603022211.k22MBb8n267351@atlas.otago.ac.nz> Serge Aleynikov wrote: I do want to throw a vote for Mats' suggestion on the alternative syntax: (I || I <- List) -> ok What I also find limiting is that it's not possible to have an accumulator when using list comprehension. Perhaps something like this could also be considered (unless someone can suggest a better syntax): [Acc+1, I || Acc, I <- List](0) -> Acc1 ^ | Initial Acc's value EEEK! I started watching this thread with the belief that the occasional small abuse of list comprehension syntax was OK. This suggestion has finally convinced me otherwise. I am not at all happy about (Expr || Pat <- List) because (A) it looks like a syntax error; it's quite likely to be 'corrected' to use square brackets (B) it reduces the parser's ability to diagnose syntax errors (C) nothing about it visually suggests the most important piece of information, which is that the expression is executed for side effects, not for its value. The idea of explicitly writing "_ = [Expr || Pat <- List]" has none of those defects. The only compiler change that's called for is to emit a warning when the value of a list comprehension *isn't* used in some way (including _= as a use). As for combining accumulation with list comprehension, one can already write lists:foldl(fun (X,Y) -> X+Y end, 0, [some list comprehension goes here]) which isn't *that* bad. Any more direct syntax (such as something based on the Lisp 'do' construct, for example) would have to involve some way of presenting two bindings for the same names in the same construct (as 'do' does), which is not a very Erlangish thing to do. 
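To make the alternatives being discussed concrete, here is the same pair of loops written out; the function names are invented for the example. print_all/1 is the side-effect-only case with the explicit "_ =" marking the discarded list, print_all2/1 is the lists:foreach version, and sum_squares/1 shows the extra pass and intermediate list that the plain foldl in sum_squares2/1 avoids:

%% Side-effect-only loop: the '_ =' makes it explicit that the list
%% built by the comprehension is thrown away.
print_all(L) ->
    _ = [io:format("~p~n", [X]) || X <- L],
    ok.

print_all2(L) ->
    lists:foreach(fun(X) -> io:format("~p~n", [X]) end, L).

%% Accumulation: folding over a comprehension builds an intermediate
%% list and makes an extra pass; the direct foldl does neither.
sum_squares(L) ->
    lists:foldl(fun(X, Acc) -> X + Acc end, 0, [X*X || X <- L]).

sum_squares2(L) ->
    lists:foldl(fun(X, Acc) -> X*X + Acc end, 0, L).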
From mats.cronqvist@REDACTED Fri Mar 3 10:33:32 2006 From: mats.cronqvist@REDACTED (Mats Cronqvist) Date: Fri, 03 Mar 2006 10:33:32 +0100 Subject: optimization of list comprehensions In-Reply-To: <44070EFD.9060605@it.uu.se> References: <4406C60B.3030902@telia.com> <4406F4EC.2060706@hq.idt.net> <4406FB23.4080802@ericsson.com> <44070EFD.9060605@it.uu.se> Message-ID: <44080D6C.3000807@ericsson.com> Per Gustafsson wrote: > Hi Mats > > Do I understand your construction correctly if I describe it like this: > > A fold-expression have the following syntax: > > (expr(X1,..XN,Y1,...,YM) || PatX <- List, PatY <-- Acc0) > > Where X1,...,XN are variables from PatX and Y1,...,YN are variables from > PatY, List is the list to fold over and Acc0 is the initial accumulator. no, but i think yours is nicer :> what i had in mind was to have a reserved variable '_' (scoped inside the comprehension) that would hold the value of the expression. this to avoid introducing the <-- operator. i wasn't very happy with it (to perl-y). e.g. the (unfortunately non-existing) string:join/2 function; string:join(["a","b","c"],"/") -> "a/b/c" can be implemented like this; string:join([Pref|Toks],Sep) -> lists:foldl(fun(T,Acc) -> [Acc,Sep|T] end, Pref, Toks). with my construction it would become (['_',Sep|T] || T <- Toks, '_' <- Pref). in yours it would be (right?); ([P,Sep|T] || T <- Toks, P <-- Pref). [...] > I feel that the issue with adding (too many) of these kinds of > constructs are that they tend to make the language harder to read > because it becomes difficult to see the difference between the different > constructs that look quite similar. yes, of course. but it seems to me that there are only two basic patterns here; the map-like and the fold-like. the Erlang Reference Manual says; "[List comprehensions] provide a succinct notation for generating elements in a list." I want a similarrly succinct notation for folding over a list. i realize i'm not the right person to decide what the syntax should look like. > Another issue is that it probably > would be quite difficult to convince the parser to accept this. i'll take your word for that. of course, from my ivory tower that's an implementation detail :> mats From mats.cronqvist@REDACTED Fri Mar 3 10:53:19 2006 From: mats.cronqvist@REDACTED (Mats Cronqvist) Date: Fri, 03 Mar 2006 10:53:19 +0100 Subject: optimization of list comprehensions In-Reply-To: <200603022211.k22MBb8n267351@atlas.otago.ac.nz> References: <200603022211.k22MBb8n267351@atlas.otago.ac.nz> Message-ID: <4408120F.4010508@ericsson.com> Richard A. O'Keefe wrote: [...] > I am not at all happy about (Expr || Pat <- List) because > (A) it looks like a syntax error; it's quite likely to be 'corrected' > to use square brackets "looks like a syntax error"? doesn't that depend on what the syntax is? "corrected" by whom? anyway, the use of '(' and ')' is incidental, the point is to not use '[' and ']' (to make it clear that we're not returning a list). > (B) it reduces the parser's ability to diagnose syntax errors > (C) nothing about it visually suggests the most important piece of > information, which is that the expression is executed for side > effects, not for its value. surely using the [] notation is even worse in that regard. > The idea of explicitly writing "_ = [Expr || Pat <- List]" has none > of those defects. The only compiler change that's called for is to > emit a warning when the value of a list comprehension *isn't* used in > some way (including _= as a use). yes, this would be a nice addition. 
> As for combining accumulation with list comprehension, > one can already write > lists:foldl(fun (X,Y) -> X+Y end, 0, > [some list comprehension goes here]) > which isn't *that* bad. i think it is that bad :> here's the thing. the list comprehensions were introduced for no good reason, in that they added nothing that you could not already do. but they make my life easier because they offer a much more succinct notation (especially since i have to look at other people's code a lot). i want the same improvement for fold-like list operations. > Any more direct syntax (such as something > based on the Lisp 'do' construct, for example) would have to involve > some way of presenting two bindings for the same names in the same > construct (as 'do' does), which is not a very Erlangish thing to do. being an old FORTRAN programmer i had no idea what the Lisp 'do' construct was. it is indeed (as far as i can tell) exactly what i want. thanks for the educational reference. mats From ulf.wiger@REDACTED Fri Mar 3 13:51:51 2006 From: ulf.wiger@REDACTED (Ulf Wiger (AL/EAB)) Date: Fri, 3 Mar 2006 13:51:51 +0100 Subject: optimization of list comprehensions Message-ID: Richard A. O'Keefe wrote: > > As for combining accumulation with list comprehension, one > can already write > lists:foldl(fun (X,Y) -> X+Y end, 0, > [some list comprehension goes here]) which > isn't *that* bad. ... and I, for one, sometimes do this, too. Of course, it has the unwanted property that you have to make an extra pass over the list. One could imagine a refactoring optimization that recognizes the use of a list comprehension as input to a map or fold, and rewrites it into something else. What complicates matters somewhat (for us hobby hackers) is that the list comprehension transforms are written in core erlang. Thus: lc1(List) -> _ = [foo(X) || X <- List, X > 17]. gets transformed to: 'lc1'/1 = %% Line 10 fun (_cor0) -> let <_cor6> = %% Line 11 letrec 'lc$^0'/1 = fun (_cor3) -> case _cor3 of <[%% Line 12 X|_cor2]> when %% Line 13 call 'erlang':'>' (X, 17) -> let <_cor4> = apply 'foo'/1 (X) in let <_cor5> = %% Line 12 apply 'lc$^0'/1 (_cor2) in [_cor4|_cor5] ( <[_cor1|_cor2]> when 'true' -> %% Line 12 apply 'lc$^0'/1 (_cor2) -| ['compiler_generated'] ) <[]> when 'true' -> [] ( <_cor3> when 'true' -> primop 'match_fail' ({'function_clause',_cor3}) -| ['compiler_generated'] ) end in apply 'lc$^0'/1 (_cor0) in %% Line 14 'ok' Now, the _ seems to be implied by the fact that _cor0 is not used, but if I understand the above code correctly, it is not quite as obvious at this level that we deliberately ignore the return value of the lc. Perhaps the combination of a parse_transform and a core_transform could do it, where the parse_transform detects [{match, 11, {var,11,'_'}, {lc,11, {call, 11, {atom,11,foo}, [{var,11,'X'}]}, [{generate, 12, {var,12,'X'}, {var,12,'List'}}, {op,13,'>',{var,13,'X'}, {integer,13,17}}]}}, and perhaps inserts a pseudo-function as a wrapper, which is later removed by the core_transform, which re-writes the accumulator expression in the core erlang code for the lc ([_cor4|_cor5])? The core erlang specification can be found here: http://www.it.uu.se/research/publications/reports/2000-030/2000-030-nc.p df The module cerl.erl in the compiler application seems to be a useful utility for those who want to write core erlang transforms. 
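For anyone who wants to reproduce this kind of listing, a small sketch (option names may differ slightly between compiler releases):

%% Write a readable Core Erlang listing (lc1.core) for inspection:
compile:file(lc1, [to_core]).

%% Or compile with debug_info and pull the abstract forms out of the
%% beam file, which is the same data the transform above starts from:
compile:file(lc1, [debug_info]).
{ok, {lc1, [{abstract_code, {raw_abstract_v1, Forms}}]}} =
    beam_lib:chunks("lc1.beam", [abstract_code]).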
/Ulf W From tobbe@REDACTED Fri Mar 3 13:53:48 2006 From: tobbe@REDACTED (Torbjorn Tornkvist) Date: Fri, 03 Mar 2006 13:53:48 +0100 Subject: optimization of list comprehensions In-Reply-To: <4408120F.4010508@ericsson.com> References: <200603022211.k22MBb8n267351@atlas.otago.ac.nz> <4408120F.4010508@ericsson.com> Message-ID: Mats Cronqvist wrote: > here's the thing. the list comprehensions were introduced for no good > reason, in that they added nothing that you could not already do. Hm...as I remember it; they were introduced because of Mnemosyne. --Tobbe From mats.cronqvist@REDACTED Fri Mar 3 14:02:31 2006 From: mats.cronqvist@REDACTED (Mats Cronqvist) Date: Fri, 03 Mar 2006 14:02:31 +0100 Subject: optimization of list comprehensions In-Reply-To: References: <200603022211.k22MBb8n267351@atlas.otago.ac.nz> <4408120F.4010508@ericsson.com> Message-ID: <44083E67.2020706@ericsson.com> Torbjorn Tornkvist wrote: > Mats Cronqvist wrote: > >> here's the thing. the list comprehensions were introduced for no >> good reason, in that they added nothing that you could not already do. > > > Hm...as I remember it; they were introduced because of Mnemosyne. in order to do what? i.e. is there anything that cannot be done without list comprehensions? (i've never used mnemosyne). mats From casper2000a@REDACTED Fri Mar 3 14:49:38 2006 From: casper2000a@REDACTED (Eranga Udesh) Date: Fri, 3 Mar 2006 19:49:38 +0600 Subject: early warning - new rdbms In-Reply-To: Message-ID: <20060303135104.5C4B2400032@mail.omnibis.com> Hi, No New RDBMS yet? Also regarding the per process memory issue in Linux/Unix came up in the mailing list lately, how can we deal with a large Erlang Mnesia DB, which may have to hold data, say over 5-6 GB of size (5 million records)? Since mnesia disk_copies DB keeps all the data in memory, it's going to be a problem, isn't it? Regards, - Eranga -----Original Message----- From: Ulf Wiger (AL/EAB) [mailto:ulf.wiger@REDACTED] Sent: Friday, February 24, 2006 3:03 PM To: Eranga Udesh Subject: RE: early warning - new rdbms I'm hoping to be able to upload a snapshot sometime this week. It will not be fully functional, but the integrity checking should work, at least. Regards, Ulf W > -----Original Message----- > From: Eranga Udesh [mailto:casper2000a@REDACTED] > Sent: den 24 februari 2006 04:35 > To: Ulf Wiger (AL/EAB) > Subject: RE: early warning - new rdbms > > Hi, > > Where's the RDBMS system you talk about? Could I get quick > access, since I am designing a system and I prefer to design > it on top of a RDBMS system, instead of a local Mnesia DB. > > Pls advice asap. > > Thanks, > - Eranga > > > > -----Original Message----- > From: owner-erlang-questions@REDACTED > [mailto:owner-erlang-questions@REDACTED] On Behalf Of Ulf > Wiger (AL/EAB) > Sent: Thursday, February 09, 2006 9:15 PM > To: Chaitanya Chalasani > Cc: erlang-questions@REDACTED > Subject: RE: early warning - new rdbms > > > Ok, but I've been sidetracked for a few days. > I'm still doing some cleanups. I will let you all know as > soon as I have something. > > (Again, I was unprepared for so many takers. > I had expected to have to announce it a few times before > anyone took the bait. ;) > > Regards, > Ulf W > > > -----Original Message----- > > From: Chaitanya Chalasani [mailto:chaitanya.chalasani@REDACTED] > > Sent: den 9 februari 2006 13:44 > > To: Ulf Wiger (AL/EAB) > > Cc: erlang-questions@REDACTED > > Subject: Re: early warning - new rdbms > > > > I would like to beta-test as well. 
> > > > On Wednesday 25 January 2006 14:16, Ulf Wiger (AL/EAB) wrote: > > > I thought I'd let the cat out of the bag a little... > > > > > > If anyone wants to beta-test or help out with some of the more > > > advanced problems, let me know. > > > > > > > > > I've come pretty far along with a new version of rdbms. It > > has several > > > nice features, and I think it's about to make the transition from > > > 'somewhat interesting' to 'actually useful': > > > > > > - JIT compilation of verification code. The overhead > > > for type and bounds checking is now only a few (~5) > > > microseconds per write operation. > > > - The parameterized indexes that I hacked into mnesia > > > before are now part of rdbms. This include ordered > > > indexes and fragmented indexes (i.e. hashed on > > > index value - so they should scale well.) > > > - Rdbms will handle fragmented tables transparently > > > (And actually handles plain tables with less overhead > > > than mnesia_frag does.) The overhead for using the > > > rdbms access module (compared to no access module) > > > on a plain transaction is in the order of 20 > > > microseconds on my 1 GHz SunBLADE. > > > - Rdbms hooks into the transaction handling in such a > > > way that it can automatically rebuild the verification > > > code as soon as a schema transaction commits. > > > - A readable file format for schema definitions, trying > > > to establish a structured way to create large mnesia > > > databases. I've also added a 'group' concept to be > > > able to group tables into corresponding subsystems, > > > since I thought this might be helpful in large > > > systems. > > > > > > > > > I'm planning to release rdbms with OTP R11, since it > requires some > > > changes to mnesia that (hopefully) will make it into R11. R11 is > > > planned for May. > > > > > > Some of the (fairly minor) changes to mnesia so far: > > > > > > - The access module can hook into the transaction > > > flow by providing callbacks for begin_activity() > > > and end_activity(). Rdbms uses this for proper > > > handling of abort and commit triggers as well as > > > loop detection in referential integrity checks. > > > It also allows rdbms to detect schema changes. > > > - An 'on-load' hook allows rdbms to build indexes > > > the first time a table is loaded. > > > - A low-level access API for foreign tables. My > > > first foreign table attempt was a 'disk_log'. > > > It makes it possible to properly log events > > > inside a transaction context. You also get > > > replicated logs almost for free, as well as > > > (if you want to) fragmented logs. (: > > > My next attempt at a foreign table is a read- > > > only file system (doesn't have to be read-only, > > > but I thought I'd start with that.) Thus > > > the experiments with converting regexps to > > > the select pattern syntax. > > > > > > One interesting experiment might be to define an ISAM table > > type for > > > really large ordered sets on disk. Combining it with > rdbms, you can > > > type- specify the data types and then convert to whatever > format is > > > expected by the ISAM files. > > > > > > Some questions to those who might be interested: > > > > > > - I'd like to break compatibility with the old > > > rdbms in some respects. Is this a problem for > > > anyone? (speak now or forever hold your peace) > > > - Do you have any suggestions or feature requests? > > > - Do you want to help out? 
> > > > > > Regards, > > > Uffe > > > > -- > > Chaitanya Chalasani > > > > > From tobbe@REDACTED Fri Mar 3 15:01:55 2006 From: tobbe@REDACTED (Torbjorn Tornkvist) Date: Fri, 03 Mar 2006 15:01:55 +0100 Subject: optimization of list comprehensions In-Reply-To: <44083E67.2020706@ericsson.com> References: <200603022211.k22MBb8n267351@atlas.otago.ac.nz> <4408120F.4010508@ericsson.com> <44083E67.2020706@ericsson.com> Message-ID: Mats Cronqvist wrote: > > > Torbjorn Tornkvist wrote: > >> Mats Cronqvist wrote: >> >>> here's the thing. the list comprehensions were introduced for no >>> good reason, in that they added nothing that you could not already do. >> >> >> >> Hm...as I remember it; they were introduced because of Mnemosyne. > > > in order to do what? i.e. is there anything that cannot be done > without list comprehensions? (i've never used mnemosyne). > > mats Hans Nilsson can probably elaborate on this quite a bit. Mnemosyne added a way of doing Mnesia queries using a syntax very close to list comprehensions, so close in fact that they were added as a general language construct. At least that is how I remember it... --Tobbe From ulf.wiger@REDACTED Fri Mar 3 15:15:48 2006 From: ulf.wiger@REDACTED (Ulf Wiger (AL/EAB)) Date: Fri, 3 Mar 2006 15:15:48 +0100 Subject: early warning - new rdbms Message-ID: I can upload a new snapshot. I'll put it into jungerl pretty soon. 32-bit Erlang can only address 4 GB. You can either run 64-bit Erlang in order to address more, or run multiple Erlang nodes. Very large tables can be managed with fragmentation. For 64-bit erlang, it's important to remember that the word size is doubled, which means that many data structures will occupy roughly twice as much memory as in the 32-bit version. Binaries work well, whereas strings are a veritable disaster (128 bits per character.) The recent addition of input/output filters to rdbms might help those who store large strings in mnesia and want to use 64-bit erlang (how large a community is that, I wonder?) /Ulf W > -----Original Message----- > From: Eranga Udesh [mailto:casper2000a@REDACTED] > Sent: den 3 mars 2006 14:50 > To: erlang-questions@REDACTED > Cc: Ulf Wiger (AL/EAB) > Subject: RE: early warning - new rdbms > > Hi, > > No New RDBMS yet? > > Also regarding the per process memory issue in Linux/Unix > came up in the mailing list lately, how can we deal with a > large Erlang Mnesia DB, which may have to hold data, say over > 5-6 GB of size (5 million records)? Since mnesia disk_copies > DB keeps all the data in memory, it's going to be a problem, isn't it? > > Regards, > - Eranga > > > > -----Original Message----- > From: Ulf Wiger (AL/EAB) [mailto:ulf.wiger@REDACTED] > Sent: Friday, February 24, 2006 3:03 PM > To: Eranga Udesh > Subject: RE: early warning - new rdbms > > > I'm hoping to be able to upload a snapshot > sometime this week. It will not be fully > functional, but the integrity checking > should work, at least. > > Regards, > Ulf W > > > -----Original Message----- > > From: Eranga Udesh [mailto:casper2000a@REDACTED] > > Sent: den 24 februari 2006 04:35 > > To: Ulf Wiger (AL/EAB) > > Subject: RE: early warning - new rdbms > > > > Hi, > > > > Where's the RDBMS system you talk about? Could I get quick > > access, since I am designing a system and I prefer to design > > it on top of a RDBMS system, instead of a local Mnesia DB. > > > > Pls advice asap. 
> > > > Thanks, > > - Eranga > > > > > > > > -----Original Message----- > > From: owner-erlang-questions@REDACTED > > [mailto:owner-erlang-questions@REDACTED] On Behalf Of Ulf > > Wiger (AL/EAB) > > Sent: Thursday, February 09, 2006 9:15 PM > > To: Chaitanya Chalasani > > Cc: erlang-questions@REDACTED > > Subject: RE: early warning - new rdbms > > > > > > Ok, but I've been sidetracked for a few days. > > I'm still doing some cleanups. I will let you all know as > > soon as I have something. > > > > (Again, I was unprepared for so many takers. > > I had expected to have to announce it a few times before > > anyone took the bait. ;) > > > > Regards, > > Ulf W > > > > > -----Original Message----- > > > From: Chaitanya Chalasani [mailto:chaitanya.chalasani@REDACTED] > > > Sent: den 9 februari 2006 13:44 > > > To: Ulf Wiger (AL/EAB) > > > Cc: erlang-questions@REDACTED > > > Subject: Re: early warning - new rdbms > > > > > > I would like to beta-test as well. > > > > > > On Wednesday 25 January 2006 14:16, Ulf Wiger (AL/EAB) wrote: > > > > I thought I'd let the cat out of the bag a little... > > > > > > > > If anyone wants to beta-test or help out with some of the more > > > > advanced problems, let me know. > > > > > > > > > > > > I've come pretty far along with a new version of rdbms. It > > > has several > > > > nice features, and I think it's about to make the > transition from > > > > 'somewhat interesting' to 'actually useful': > > > > > > > > - JIT compilation of verification code. The overhead > > > > for type and bounds checking is now only a few (~5) > > > > microseconds per write operation. > > > > - The parameterized indexes that I hacked into mnesia > > > > before are now part of rdbms. This include ordered > > > > indexes and fragmented indexes (i.e. hashed on > > > > index value - so they should scale well.) > > > > - Rdbms will handle fragmented tables transparently > > > > (And actually handles plain tables with less overhead > > > > than mnesia_frag does.) The overhead for using the > > > > rdbms access module (compared to no access module) > > > > on a plain transaction is in the order of 20 > > > > microseconds on my 1 GHz SunBLADE. > > > > - Rdbms hooks into the transaction handling in such a > > > > way that it can automatically rebuild the verification > > > > code as soon as a schema transaction commits. > > > > - A readable file format for schema definitions, trying > > > > to establish a structured way to create large mnesia > > > > databases. I've also added a 'group' concept to be > > > > able to group tables into corresponding subsystems, > > > > since I thought this might be helpful in large > > > > systems. > > > > > > > > > > > > I'm planning to release rdbms with OTP R11, since it > > requires some > > > > changes to mnesia that (hopefully) will make it into > R11. R11 is > > > > planned for May. > > > > > > > > Some of the (fairly minor) changes to mnesia so far: > > > > > > > > - The access module can hook into the transaction > > > > flow by providing callbacks for begin_activity() > > > > and end_activity(). Rdbms uses this for proper > > > > handling of abort and commit triggers as well as > > > > loop detection in referential integrity checks. > > > > It also allows rdbms to detect schema changes. > > > > - An 'on-load' hook allows rdbms to build indexes > > > > the first time a table is loaded. > > > > - A low-level access API for foreign tables. My > > > > first foreign table attempt was a 'disk_log'. 
> > > > It makes it possible to properly log events > > > > inside a transaction context. You also get > > > > replicated logs almost for free, as well as > > > > (if you want to) fragmented logs. (: > > > > My next attempt at a foreign table is a read- > > > > only file system (doesn't have to be read-only, > > > > but I thought I'd start with that.) Thus > > > > the experiments with converting regexps to > > > > the select pattern syntax. > > > > > > > > One interesting experiment might be to define an ISAM table > > > type for > > > > really large ordered sets on disk. Combining it with > > rdbms, you can > > > > type- specify the data types and then convert to whatever > > format is > > > > expected by the ISAM files. > > > > > > > > Some questions to those who might be interested: > > > > > > > > - I'd like to break compatibility with the old > > > > rdbms in some respects. Is this a problem for > > > > anyone? (speak now or forever hold your peace) > > > > - Do you have any suggestions or feature requests? > > > > - Do you want to help out? > > > > > > > > Regards, > > > > Uffe > > > > > > -- > > > Chaitanya Chalasani > > > > > > > > > > > > From casper2000a@REDACTED Fri Mar 3 15:17:56 2006 From: casper2000a@REDACTED (Eranga Udesh) Date: Fri, 3 Mar 2006 20:17:56 +0600 Subject: Optimizing Erlang In-Reply-To: Message-ID: <20060303141945.56CC8400028@mail.omnibis.com> Hi, I have a couple of questions regarding Erlang CPU/memory Optimization. 1. Due to the single assignment restriction in Erlang, when changing values in a record as below. Eg: NewRecord = OldRecord#record_type{attribute = Value}. does Erlang take a full copy of the record to change only the attribute called "attribute"? If so, isn't that a waste of full memory copying to just change one attribute and if the old copy is never used? 2. In gen_server handle_call/handle_cast/etc, does the State parameter makes a new copy in each call/cast/etc? Or changes the same State? 3. When passing large lists/binary/etc to a function, are they going as references or values (a full copy of data)? 4. Is there any overhead difference between writing functions in the form of, fun(A, B, C) or fun([A,B,C]) or fun({A,B,B})? 5. Is there any overhead difference between if and case expressions? 6. Is it Ober or jInterface better to write Java based GUIs for Erlang Backends? Any other better language for this? GS doesn't have much support in designing good interfaces. 7. In a high capacity system, is it better to run the Mnesia DB in the same Erlang instance (no data transfer between processes) or have it as a RDBMS in a separate Erlang instance, so that in a multi-CPU environment can gives a higher capacity? 8. Does Mnesia bag type database does an exhaustive search when called mnesia_match_object/3? Thanks, - Eranga -------------- next part -------------- An HTML attachment was scrubbed... URL: From ulf.wiger@REDACTED Fri Mar 3 15:34:17 2006 From: ulf.wiger@REDACTED (Ulf Wiger (AL/EAB)) Date: Fri, 3 Mar 2006 15:34:17 +0100 Subject: Optimizing Erlang Message-ID: 1. When a tuple element is changed, a new copy of the tuple is created, but keep in mind that it is just the pointers to the elements that are copied. You have to make pretty large tuples (say, over 100 elements) in order for this to be noticable. 2. The State parameter is copied any time the callback module does something to it, but the gen_server module doesn't ever modify it. 3. Large binaries are passed by reference. Lists are always copied. 4. 
There will be overhead, but it is hard to measure. 5. No 6. I think this is a matter of taste. There is a WxWidgets library for erlang, and Mats Cronquist's gtkNode. Joe Armstrong has also done some nice things with tcl (but hasn't released it yet.) Joe is also doing wonderful stuff with JavaScript & Erlang (also not released yet.) Other people have used Delphi to design the front-end, and then erlang for the backend. 7. I don't know if there's a universal answer to this one, except that it depends on the particular application and the access patterns. If you _would_ want to separate the DB and application, you may want to create a high-level database API customized to your application, where you can perhaps group dabase operations together. Otherwise, the performance penalty of communicating may eat up the advantages of having two processors. 8. Not if the key is bound in your pattern. It will to a linear search through the set of objects matching the key. If the key is unbound, it will be a linear search no matter what table type you're using. Regards, Ulf W ________________________________ From: owner-erlang-questions@REDACTED [mailto:owner-erlang-questions@REDACTED] On Behalf Of Eranga Udesh Sent: den 3 mars 2006 15:18 To: erlang-questions@REDACTED Subject: Optimizing Erlang Hi, I have a couple of questions regarding Erlang CPU/memory Optimization. 1. Due to the single assignment restriction in Erlang, when changing values in a record as below. Eg: NewRecord = OldRecord#record_type{attribute = Value}. does Erlang take a full copy of the record to change only the attribute called "attribute"? If so, isn't that a waste of full memory copying to just change one attribute and if the old copy is never used? 2. In gen_server handle_call/handle_cast/etc, does the State parameter makes a new copy in each call/cast/etc? Or changes the same State? 3. When passing large lists/binary/etc to a function, are they going as references or values (a full copy of data)? 4. Is there any overhead difference between writing functions in the form of, fun(A, B, C) or fun([A,B,C]) or fun({A,B,B})? 5. Is there any overhead difference between if and case expressions? 6. Is it Ober or jInterface better to write Java based GUIs for Erlang Backends? Any other better language for this? GS doesn't have much support in designing good interfaces. 7. In a high capacity system, is it better to run the Mnesia DB in the same Erlang instance (no data transfer between processes) or have it as a RDBMS in a separate Erlang instance, so that in a multi-CPU environment can gives a higher capacity? 8. Does Mnesia bag type database does an exhaustive search when called mnesia_match_object/3? Thanks, - Eranga -------------- next part -------------- An HTML attachment was scrubbed... URL: From sgsfak@REDACTED Fri Mar 3 15:50:04 2006 From: sgsfak@REDACTED (Stelianos G. Sfakianakis) Date: Fri, 3 Mar 2006 16:50:04 +0200 Subject: Erlang and CORBA Message-ID: Hi! I 've started to investigate the use of Erlang/Yaws for some work of mine and I am currently puzzled about the corba *client* support. To be more specific I have a CORBA server written using TAO and I want to communicate with it from Erlang. Questions: 1. It seems that I have to start orber and according to the docs I did this: mnesia:start(). corba:orb_init([{domain, "testORB"}, {orber_debug_level, 10}]). orber:install([node()], [{ifr_storage_type, ram_copies}]). orber:start(). But by doing so a new listening TCP port (4001) is opened. 
Since I want to just use the client side of corba this seems unnecessary. Can I use corba without opening a (server) port? 2. I used the corba:string_to_object/1 function and got an object reference from a corbaloc string. Then I called a simple method and saw that the server responded. The problem is that when the erlang tries to un marshall the response throws an error like this [1085] cdr_decode:ifrid_to_name("IDL:Test/TestSt:1.0"). IFR Id not found: [] Any ideas about what is wrong? Should I do anything to populate the IFR with the my (user defined) corba types? Thanks! Stelios From per.gustafsson@REDACTED Fri Mar 3 15:50:20 2006 From: per.gustafsson@REDACTED (Per Gustafsson) Date: Fri, 03 Mar 2006 15:50:20 +0100 Subject: Optimizing Erlang In-Reply-To: References: Message-ID: <440857AC.4070601@it.uu.se> >> >> 3. When passing large lists/binary/etc to a function, are >> they going as references or values (a full copy of data)? > 3. Large binaries are passed by reference. Lists are always copied. > To clarify, for function calls arguments are always passed by reference. When a term is sent to another process however, everything but large binaries is copied. Per From mats.cronqvist@REDACTED Fri Mar 3 16:11:11 2006 From: mats.cronqvist@REDACTED (Mats Cronqvist) Date: Fri, 03 Mar 2006 16:11:11 +0100 Subject: Optimizing Erlang In-Reply-To: References: Message-ID: <44085C8F.2050308@ericsson.com> some random ruminations... gtkNode is pretty damn good (in my unbiased opinion), if you have GTK2 installed. you can use Glade to design the GUI and normal erlang distribution to communicate with it. firefox 1.5, javascript and erlang seems like The Future (tm). tcl, and hence gs, is too old to be any good. ulf spelt my name wrong. mats Ulf Wiger (AL/EAB) wrote: > [...] > > 6. I think this is a matter of taste. There is a WxWidgets library > for erlang, and Mats Cronquist's gtkNode. Joe Armstrong has also > done some nice things with tcl (but hasn't released it yet.) Joe is > also doing wonderful stuff with JavaScript & Erlang (also not > released yet.) Other people have used Delphi to design the > front-end, and then erlang for the backend. From nick@REDACTED Fri Mar 3 16:19:41 2006 From: nick@REDACTED (Niclas Eklund) Date: Fri, 3 Mar 2006 16:19:41 +0100 (MET) Subject: Erlang and CORBA In-Reply-To: Message-ID: Hello! #1 Currently it is not possible to run a pure client. If you intend to run multiple instances you can use iiop_port == 0 (then the OS will supply a vacant port). #2 I guess that you have not registered the interface in the IFR. When you compile your IDL-specification, a module named oe_.erl, which exports a function called oe_register: http://www.erlang.org/doc/doc-5.4.12/lib/orber-3.6.2/doc/html/ch_debugging.html#14 Orber must be started before registration. mnesia:start(). corba:orb_init([{domain, "testORB"}, {orber_debug_level, 10}]). orber:install([node()], [{ifr_storage_type, ram_copies}]). orber:start(). oe_:oe_register(). Note, you can use a minimal version of the IFR by adding {flags,16#80} when invoking corba:orb_init/1. You should also add the compile option {light_ifr, true} when compiling your IDL-specification (see the IC documentation). For more information about configuring Orber see: http://www.erlang.org/doc/doc-5.4.12/lib/orber-3.6.2/doc/html/ch_install.html#5.2 Best regards, Niclas On Fri, 3 Mar 2006, Stelianos G. Sfakianakis wrote: > Hi! 
> > I 've started to investigate the use of Erlang/Yaws for some work of > mine and I am currently puzzled about the corba *client* support. To > be more specific I have a CORBA server written using TAO and I want to > communicate with it from Erlang. Questions: > > 1. It seems that I have to start orber and according to the docs I did this: > > mnesia:start(). > corba:orb_init([{domain, "testORB"}, {orber_debug_level, 10}]). > orber:install([node()], [{ifr_storage_type, ram_copies}]). > orber:start(). > > But by doing so a new listening TCP port (4001) is opened. Since I > want to just use the client side of corba this seems unnecessary. Can > I use corba without opening a (server) port? > > 2. I used the corba:string_to_object/1 function and got an object > reference from a corbaloc string. Then I called a simple method and > saw that the server responded. The problem is that when the erlang > tries to un marshall the response throws an error like this > > [1085] cdr_decode:ifrid_to_name("IDL:Test/TestSt:1.0"). IFR Id not found: [] > > Any ideas about what is wrong? Should I do anything to populate the > IFR with the my (user defined) corba types? > > Thanks! > Stelios From ulf.wiger@REDACTED Fri Mar 3 16:24:59 2006 From: ulf.wiger@REDACTED (Ulf Wiger (AL/EAB)) Date: Fri, 3 Mar 2006 16:24:59 +0100 Subject: Optimizing Erlang Message-ID: Of course. My bad. I read the question too quickly. My reply assumed message passing only - not passing arguments to a function. /U > -----Original Message----- > From: owner-erlang-questions@REDACTED > [mailto:owner-erlang-questions@REDACTED] On Behalf Of Per Gustafsson > Sent: den 3 mars 2006 15:50 > To: erlang-questions@REDACTED > Subject: Re: Optimizing Erlang > > > >> > >> 3. When passing large lists/binary/etc to a function, are > >> they going as references or values (a full copy of data)? > > > 3. Large binaries are passed by reference. Lists are always copied. > > > > To clarify, for function calls arguments are always passed by > reference. > When a term is sent to another process however, everything > but large binaries is copied. > > Per > > From ulf.wiger@REDACTED Fri Mar 3 16:30:31 2006 From: ulf.wiger@REDACTED (Ulf Wiger (AL/EAB)) Date: Fri, 3 Mar 2006 16:30:31 +0100 Subject: Optimizing Erlang Message-ID: > ulf spelt my name wrong. I'm not doing so well today, am I? But still, we've only known each other for seven years or so... I need at least 10 years to learn how to spell something. /U PS On the topic of spelling - "spelled" is the proper past tense. A "spelt" is a split piece of wood. ;-) From mats.cronqvist@REDACTED Fri Mar 3 16:34:51 2006 From: mats.cronqvist@REDACTED (Mats Cronqvist) Date: Fri, 03 Mar 2006 16:34:51 +0100 Subject: Optimizing Erlang In-Reply-To: References: Message-ID: <4408621B.603@ericsson.com> i write everything in swedish and translate with babelfish. mats http://www.thefreedictionary.com/dict.asp?Word=spelt spelt (v.) A past tense and a past participle of spell Ulf Wiger (AL/EAB) wrote: > > >> ulf spelt my name wrong. > > > I'm not doing so well today, am I? > > But still, we've only known each other for > seven years or so... I need at least 10 years > to learn how to spell something. > > /U > > PS On the topic of spelling - "spelled" is the > proper past tense. A "spelt" is a split piece of > wood. 
;-) From peter.c.marks@REDACTED Fri Mar 3 16:48:32 2006 From: peter.c.marks@REDACTED (Peter Marks) Date: Fri, 3 Mar 2006 10:48:32 -0500 Subject: Literate Programming Message-ID: <239d02ca0603030748x2c472de1yec422aa60a76c44@mail.gmail.com> This article at LTU talks about a new Literate Programming (Knuth) website http://lambda-the-ultimate.org/node/1336 I went there and they have a category for Erlang, but no programs yet! Peter -------------- next part -------------- An HTML attachment was scrubbed... URL: From chandrashekhar.mullaparthi@REDACTED Fri Mar 3 18:32:38 2006 From: chandrashekhar.mullaparthi@REDACTED (chandru) Date: Fri, 3 Mar 2006 17:32:38 +0000 Subject: Literate Programming In-Reply-To: <239d02ca0603030748x2c472de1yec422aa60a76c44@mail.gmail.com> References: <239d02ca0603030748x2c472de1yec422aa60a76c44@mail.gmail.com> Message-ID: On 03/03/06, Peter Marks wrote: > > This article at LTU talks about a new Literate Programming (Knuth) website > > http://lambda-the-ultimate.org/node/1336 > > I went there and they have a category for Erlang, but no programs yet! > There is one now. I've added an implementation of insertion sort. But I created a subcategory in the process instead of adding an article. If anyone knows how to fix it, please do. cheers Chandru From kruegger@REDACTED Sat Mar 4 00:16:52 2006 From: kruegger@REDACTED (Stephen Han) Date: Fri, 3 Mar 2006 15:16:52 -0800 Subject: Literate Programming In-Reply-To: References: <239d02ca0603030748x2c472de1yec422aa60a76c44@mail.gmail.com> Message-ID: <86f1f5350603031516s22409250m6262d1afcb817a5e@mail.gmail.com> Hi... I briefly looked at the wiki pedia insertion sort algorithm (honestly, it was pseudo code :-P ) and I think this code also works, too. Only difference is that I compare value from the beginning not from the end in the inner loop. -module(is). -export([insert_sort/1]). insert(Value, [H|T]) when Value >= H -> [H|insert(Value, T)]; insert(Value, T) -> [Value|T]. insert_sort(A) -> insert_sort(A, []). insert_sort([], Acc) -> Acc; insert_sort([Value|T], Acc) -> insert_sort(T, insert(Value, Acc)). regards, On 3/3/06, chandru wrote: > > On 03/03/06, Peter Marks wrote: > > > > This article at LTU talks about a new Literate Programming (Knuth) > website > > > > http://lambda-the-ultimate.org/node/1336 > > > > I went there and they have a category for Erlang, but no programs yet! > > > > There is one now. I've added an implementation of insertion sort. But > I created a subcategory in the process instead of adding an article. > If anyone knows how to fix it, please do. > > cheers > Chandru > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ke.han@REDACTED Sat Mar 4 05:53:48 2006 From: ke.han@REDACTED (ke han) Date: Sat, 4 Mar 2006 12:53:48 +0800 Subject: early warning - new rdbms In-Reply-To: References: Message-ID: On Mar 3, 2006, at 10:15 PM, Ulf Wiger ((AL/EAB)) wrote: > to rdbms might help those who store large > strings in mnesia and want to use 64-bit erlang > (how large a community is that, I wonder?) > > /Ulf W I am developing traditional user-centric enterprise apps with web UI. In my current app, about 80% of the db attributes are strings. My next app I'm starting on very soon will have similar characteristics. The issues of string size in erlang is a serious looming issue for me. I'm going to have to figure out something pretty soon. I keep deferring the issue hoping someone will publish an easy to use utf-8 string library ;-) any ideas? 
ke han From pupeno@REDACTED Sat Mar 4 07:05:29 2006 From: pupeno@REDACTED (Pupeno) Date: Sat, 4 Mar 2006 03:05:29 -0300 Subject: Calling a function from command line Message-ID: <200603040305.33891.pupeno@pupeno.com> Hello, I am making a kind of script that generates documentation using edoc. I need to call edoc:files(["my", "list", "of", "files"], [{so, "me"}, {opt, "ions"}]) but that can't be called from the shell itself, so, is there a better way to do it than: echo "edoc:files([\"my\", \"list\", \"of\", \"files\"], [{so, \"me\"}, {opt, \"ions\"}])" | erl ? Thanks. -- Pupeno (http://pupeno.com) Vendemos: Camara de fotos rusa ????? ET (ZENIT) con flash ?????: http://pupeno.com/vendo/#camara -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 189 bytes Desc: not available URL: From erlang@REDACTED Sat Mar 4 08:00:33 2006 From: erlang@REDACTED (Michael McDaniel) Date: Fri, 3 Mar 2006 23:00:33 -0800 Subject: Calling a function from command line In-Reply-To: <200603040305.33891.pupeno@pupeno.com> References: <200603040305.33891.pupeno@pupeno.com> Message-ID: <20060304070033.GQ19585@delora.autosys.us> On Sat, Mar 04, 2006 at 03:05:29AM -0300, Pupeno wrote: > Hello, > I am making a kind of script that generates documentation using edoc. > I need to call edoc:files(["my", "list", "of", "files"], [{so, "me"}, {opt, > "ions"}]) but that can't be called from the shell itself, so, is there a > better way to do it than: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ using the bash shell $ export SRC="a.erl b.erl ... n.erl" $ erl -run edoc files userguide ${SRC} -run erlang halt $ cat userguide % @doc % My Whaterver User Guide. % %
 %
 % blah blah blah, and config files are a.conf, b.conf
 %
% % @end -module(userguide). ~Michael From ulf@REDACTED Sat Mar 4 09:30:38 2006 From: ulf@REDACTED (Ulf Wiger) Date: Sat, 04 Mar 2006 09:30:38 +0100 Subject: early warning - new rdbms In-Reply-To: References: Message-ID: Den 2006-03-04 05:53:48 skrev ke han : > I keep deferring the issue hoping someone will publish an easy to use > utf-8 string library any ideas? How about jungerl/lib/ucs? /Ulf W -- Ulf Wiger From ulf.wiger@REDACTED Sat Mar 4 16:46:23 2006 From: ulf.wiger@REDACTED (Ulf Wiger (AL/EAB)) Date: Sat, 4 Mar 2006 16:46:23 +0100 Subject: early warning - new rdbms Message-ID: Eranga Udesh wrote: > > No New RDBMS yet? I finally got around to checking in the new code in CVS (jungerl). I messed it up on the first attempt, but hopefully, all is ok now. It's not visible through the web interface yet. Let me know if it doesn't work. =================================================== Remember that you have to use the mnesia patches AND recompile all the other mnesia modules as well (using the mnesia.hrl in rdbms)!!!! =================================================== I've added a {unique,bool()} attribute on my indexes, and there's a test suite to "verify" that it works. I've also added test suites to check that the indexing works not just for adds, but for update and delete as well. I still need to check fragmented indexes, and perform a slew of robustness tests. I'm going to be pretty busy with other things for a while, but I welcome any feedback from the rest of you, including help with the documentation. (: There are also some issues with the make files in jungerl. It didn't build automatically when I did 'make' at the top level. I haven't even tried to investigate. Any help with the make files would be most appreciated. The rdbms.app should be built automatically, but isn't today (you _can_ build it with 'builder', though.) I'll answer questions and fix bugs as fast as I can. Regards, Ulf W From richardc@REDACTED Sun Mar 5 10:51:19 2006 From: richardc@REDACTED (Richard Carlsson) Date: Sun, 05 Mar 2006 10:51:19 +0100 Subject: Calling a function from command line In-Reply-To: <200603040305.33891.pupeno@pupeno.com> References: <200603040305.33891.pupeno@pupeno.com> Message-ID: <440AB497.9020604@it.uu.se> Pupeno wrote: > Hello, > I am making a kind of script that generates documentation using edoc. > I need to call edoc:files(["my", "list", "of", "files"], [{so, "me"}, {opt, > "ions"}]) but that can't be called from the shell itself, so, is there a > better way to do it than: > > echo "edoc:files([\"my\", \"list\", \"of\", \"files\"], [{so, \"me\"}, {opt, > \"ions\"}])" | erl Try this: erl -noshell -run edoc_run files '["my","list","of","files"]' \ '[{opt, "ions"}]' -s init stop But if you have organized your files as an application, i suggest: erl -noshell -run edoc_run application $APP '[{opt, "ions"}]' \ -s init stop Check the Makefile for edoc itself for a concrete usage example. /Richard From ulf.wiger@REDACTED Sun Mar 5 18:23:02 2006 From: ulf.wiger@REDACTED (Ulf Wiger (AL/EAB)) Date: Sun, 5 Mar 2006 18:23:02 +0100 Subject: bootstrapping rdbms Message-ID: I'm not having any luck getting in touch with sourceforge at the moment. Don't know if it's a problem on my end, or at sourceforge. Anyway, I thought I'd check in the following addition in rdbms.erl. The idea was that you could try out rdbms easier if you didn't have to recompile the original mnesia modules. 
As long as you compile the rdbms/mnesia-patches/src and start erlang with -pa $rdbms/mnesia-patches/ebin (and, of course, -pa $rdbms/ebin), then you can call rdbms:patch_mnesia() before calling mnesia:start([{access_module, rdbms}]). The patch_mnesia() function will recompile the mnesia code in memory, not touching the source on disk. It adds one attribute to the cstruct record, so you won't be able to use an existing database without converting the data first (no code provided for that.) You _can_ create new databases, though. Obviously, the patch_mnesia() function will have to be called again, any time the node is restarted. I'll check it into cvs whenever I can establish contact again. /Ulf W patch_mnesia() -> application:load(mnesia), {ok,Ms} = application:get_key(mnesia, modules), ToPatch = Ms -- [mnesia, mnesia_controller, mnesia_frag, mnesia_lib, mnesia_loader, mnesia_log, mnesia_schema, mnesia_tm], OrigDir = filename:join(code:lib_dir(mnesia), "ebin"), lists:foreach( fun(M) -> F = filename:join( OrigDir,atom_to_list(M) ++ code:objfile_extension()), {ok,{_M,[{abstract_code,{raw_abstract_v1,Forms}}]}} = beam_lib:chunks(F, [abstract_code]), [_|TailF] = Forms, io:format("Transforming ~p ... ", [M]), NewTailF = transform_mod(TailF), {ok, Module, Bin} = compile:forms(NewTailF, []), io:format("ok.~n", []), case code:load_binary(Module, foo, Bin) of {module, Module} -> ok; Error -> erlang:error({Error,Module}) end end, ToPatch). transform_mod(Fs) -> lists:map(fun({attribute,L,record,{cstruct,Flds}}) -> {attribute,L,record,{cstruct, insert_attr(Flds)}}; (X) -> X end, Fs). insert_attr([{record_field,L,{atom,_,load_order},_} = H|T]) -> [{record_field,L,{atom,L,external_copies}, {nil,L}},H|T]; insert_attr([H|T]) -> [H|insert_attr(T)]; insert_attr([]) -> []. From ernie.makris@REDACTED Sun Mar 5 19:25:44 2006 From: ernie.makris@REDACTED (Ernie Makris) Date: Sun, 05 Mar 2006 13:25:44 -0500 Subject: Calling a function from command line In-Reply-To: <200603040305.33891.pupeno@pupeno.com> References: <200603040305.33891.pupeno@pupeno.com> Message-ID: <440B2D28.6070404@comcast.net> Hi Pupeno, Please take a look erl_call utility that I love. It may not be the best fit for command line, but try it out. http://www.erlang.org/doc/doc-5.4.12/lib/erl_interface-3.5.4/doc/html/erl_call.html Thanks Ernie Pupeno wrote: > Hello, > I am making a kind of script that generates documentation using edoc. > I need to call edoc:files(["my", "list", "of", "files"], [{so, "me"}, {opt, > "ions"}]) but that can't be called from the shell itself, so, is there a > better way to do it than: > > echo "edoc:files([\"my\", \"list\", \"of\", \"files\"], [{so, \"me\"}, {opt, > \"ions\"}])" | erl > > ? > > Thanks. > From mickael.remond@REDACTED Sun Mar 5 19:37:46 2006 From: mickael.remond@REDACTED (Mickael Remond) Date: Sun, 5 Mar 2006 19:37:46 +0100 Subject: Erlang Runtime and packaging system Message-ID: <20060305183746.GB20742@memphis.process-one.net> Hello, As promised, we have published several new tools for packaging and distributing Erlang applications. Erlrt is a proof of concept, which does not yet handle dependancies and versioning. This however demonstrate a simple way to install and manage Erlang/OTP distribution. The distribution site is located on: http://erlrt.process-one.net/ After having deployed an erlrt base for your system (archive is 1.5Mb), you can decide which OTP library you would like to install. Here is an example Erlrt session: To start Erlang, go to the directory where archives have been unpacked. 
$ cd erlrt $ ./start.sh It is possible to upgrade your Erlang installation using the command prompt. * list available packages > erlrt:available(). * list installed packages > erlrt:installed(). * install a package > erlrt:install("mnesia"). * uninstall a package > erlrt:uninstall("mnesia"). Note that no build step are needed on the client machine (unlike erlmerge). The build is made and maintained for the central repository of code. The applications build and packaged in REPOS (Yaws, ejabberd, Wings3d, etc.) will be added to the Erlrt repository. No need to create specific packaging for each platform. Please note that for now, the Windows archive is not complete and it will not work. This will be fixed in the next days. We have also updated the following software: - New Erlang REPOS release: 1.4 beta 2 (Now based on R10B-9 for Linux, Windows and MacOSX). Erlang REPOS is the basis for the central archive repository. See: http://support.process-one.net/doc/display/CONTRIBS/Erlang+REPOS - Erlang R10B-8 for Zaurus archive has been uploaded to: http://www.erlang-projects.org/Public/rpmdeb/erlang_for_zaurus/view Comments and suggestions are very welcome. -- Micka?l R?mond http://www.process-one.net/ From ok@REDACTED Mon Mar 6 02:35:46 2006 From: ok@REDACTED (Richard A. O'Keefe) Date: Mon, 6 Mar 2006 14:35:46 +1300 (NZDT) Subject: optimization of list comprehensions Message-ID: <200603060135.k261ZkNu288015@atlas.otago.ac.nz> I wrote: > I am not at all happy about (Expr || Pat <- List) because > (A) it looks like a syntax error; it's quite likely to be 'corrected' > to use square brackets Mats Cronqvist replied: "looks like a syntax error"? doesn't that depend on what the syntax is? No. It depends on what people *THINK* it is. The term was "looks like", not "is". "corrected" by whom? People, what else? Maintenance programmers often fix things that aren't really broken (sometimes until it _is_ broken; the KMP pattern matcher is a good example.) They are especially likely to fix things that LOOK broken. anyway, the use of '(' and ')' is incidental, the point is to not use '[' and ']' (to make it clear that we're not returning a list). But with everything else so very strongly resembling the syntax for a list comprehension, the use of round parentheses is NOT enough to make it CLEAR that we are not returning a list. If you really want to be clear, make it (do Expr for Pat <- List) with the trivial case of iterating over no lists being (do Expr) surely using the [] notation is even worse in that regard. A headache may not be as bad as a toothache; but if I have a toothache it doesn't help me much to give me a headache as well. > As for combining accumulation with list comprehension, > one can already write > lists:foldl(fun (X,Y) -> X+Y end, 0, > [some list comprehension goes here]) > which isn't *that* bad. i think it is that bad :> here's the thing. the list comprehensions were introduced for no good reason, in that they added nothing that you could not already do. but they make my life easier because they offer a much more succinct notation (especially since i have to look at other people's code a lot). I think you just said that making your life easier is not a good reason. One really important thing here is that there was *precedent* for list comprehensions. By the time Erlang got them, list comprehension was a well established notation (up to choice of punctuation marks) which had proven itself in several different languages. 
(Even Prolog all-solutions" operations are related, and, by the way, always construct lists.) If you except Interlisp and Common Lisp, there *isn't* any precedent for a comprhenion-cum-fold. So the risk of doing it badly is high; it's not just a matter of adopting an already-existing notation. And the evidence so far bears this out: the actual concrete proposals in this thread are quite astonishingly BAD. i want the same improvement for fold-like list operations. > Any more direct syntax (such as something > based on the Lisp 'do' construct, for example) would have to involve > some way of presenting two bindings for the same names in the same > construct (as 'do' does), which is not a very Erlangish thing to do. being an old FORTRAN programmer i had no idea what the Lisp 'do' construct was. it is indeed (as far as i can tell) exactly what i want. thanks for the educational reference. Here's the translation: (DO ((var-1 init-1 step-1) ... (var-n init-n step-n)) (end-test result-expr) (stmt-1) ... (stmt-k)) = (LETREC ((dummy (LAMBDA (var-1 ... var-n) (IF end-test result-expr (BEGIN (stmt-1) ... (stmt-k) (dummy step-1 ... step-n)))))) (dummy init-1 ... init-n)) In words: Bind variables var-1 to var-n to the values of the initial expressions init-1 to init-n. While the end-test is not true, perform the statements in the body (a "pure" use of DO has no statements) evaluate the step expressions step-1 to step-n and rebind var-1 to var-n to their values Finally return the value of result-expr. There is no actual mutation here, just rebinding. The translation makes that explicit by constructing a recursive function with the iteration being tail recursion. Just scribbling down syntax ad hoc, one might do something like this in Erlang as let Pattern1 = Expr1 [then Expr2 for Pattern2 <- Expr3] in Expr4 So we might do let X = 0 then X+Y for {'fred',Y} <- Some_List in X with the translation being something like (fun(Pattern1) -> Expr4 end)(lists:foldl( fun(Pattern1,Pattern2) -> Expr2 end, Expr1, [Pattern2 || Pattern2 <- Expr3])) From casper2000a@REDACTED Mon Mar 6 04:28:44 2006 From: casper2000a@REDACTED (Eranga Udesh) Date: Mon, 6 Mar 2006 09:28:44 +0600 Subject: early warning - new rdbms In-Reply-To: Message-ID: <20060306033015.407FF40008F@mail.omnibis.com> Hi Ulf, This is excellent news, thanks. But unfortunately it doesn't even come when I tried to do CVS download. I downloaded all the jungerl packages by using "." instead of a module name. Please check if the check-in is correct? What's the module name you entered for this project? Thanks, - Eranga -----Original Message----- From: Ulf Wiger (AL/EAB) [mailto:ulf.wiger@REDACTED] Sent: Saturday, March 04, 2006 9:46 PM To: Eranga Udesh; erlang-questions@REDACTED Subject: RE: early warning - new rdbms Eranga Udesh wrote: > > No New RDBMS yet? I finally got around to checking in the new code in CVS (jungerl). I messed it up on the first attempt, but hopefully, all is ok now. It's not visible through the web interface yet. Let me know if it doesn't work. =================================================== Remember that you have to use the mnesia patches AND recompile all the other mnesia modules as well (using the mnesia.hrl in rdbms)!!!! =================================================== I've added a {unique,bool()} attribute on my indexes, and there's a test suite to "verify" that it works. I've also added test suites to check that the indexing works not just for adds, but for update and delete as well. 
I still need to check fragmented indexes, and perform a slew of robustness tests. I'm going to be pretty busy with other things for a while, but I welcome any feedback from the rest of you, including help with the documentation. (: There are also some issues with the make files in jungerl. It didn't build automatically when I did 'make' at the top level. I haven't even tried to investigate. Any help with the make files would be most appreciated. The rdbms.app should be built automatically, but isn't today (you _can_ build it with 'builder', though.) I'll answer questions and fix bugs as fast as I can. Regards, Ulf W From vipin@REDACTED Mon Mar 6 08:18:38 2006 From: vipin@REDACTED (vipin) Date: Mon, 06 Mar 2006 12:48:38 +0530 Subject: Understanding Erlang based behaviour. Message-ID: <440BE24E.2090103@picopeta.com> Hi, In order to understand the behaviour of an Erlang server, we deviced several experiments based around a simple echo server. The Erlang-based echo server accepts connections from clients and merely echoes the messages sent to it. To start with, we spawned clients from a single separate machine and logged the number of messages handled by the echo server. We seemed to be approaching a certain limit. Now, if we used two separate client machines to spawn clients, we seemed to be approaching a different limit that appears to be twice the earlier single client machine limit. So, naturally we moved to three separate client machines to spawn clients. The ability to handle client machines shot up sharply and erratically. We would appreciate anyone shedding light on the results of our experiments. We are attaching below the Erlang code for both the echo server and clients. The results are tabulated below. We measured the server load in terms of the number of client messages handled per minute. The machine on which the Erlang-based echo server ran was constant throughout the experiment. No of pkt/min pkt/min Clients (single m/c) (two m/c) ----------------------------------------- 10 123751 237008 20 123754 261100 30 123744 255102 40 123752 252738 60 123757 249248 80 123760 252072 100 123760 252523 150 123766 248284 200 123766 247534 300 123766 253735 500 123766 268661 1000 123765 299774 ----------------------------------------- No of pkt/min Clients (three m/c) ----------------------------------------- 15 505751.9 30 726991 45 915570.4 60 1039133.7 75 1124196.2 90 1157796.3 150 1094065 210 1074785 300 912289.6 600 996473.7 1200 363128.5 ----------------------------------------- Thanks in advance. Vipin Puri -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: echoserv.erl URL: -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: echoclient.erl URL: From fritchie@REDACTED Mon Mar 6 09:15:05 2006 From: fritchie@REDACTED (Scott Lystig Fritchie) Date: Mon, 06 Mar 2006 02:15:05 -0600 Subject: Unhappiness with high TCP accept rates? Message-ID: <200603060815.k268F5AV016096@snookles.snookles.com> I've spent a very puzzling weekend trying to figure out what has been causing a problem with an Erlang application that would get perhaps 100 TCP connection attempts immediately and then new connections every 20-30 milliseconds. The Erlang VM was CPU-bound (very busy, but I had some nasty load generators, so "very busy" was normal)... ... and then the problems began. The weirdest problem is the very, very strange looking "tcpdump" packet captures I was getting. 
This was captured on the *server* side (server = 192.168.0.165), so all the "TCP Retransmission" stuff doesn't make any sense if the receiving machine can see each packet that was allegedly dropped & retransmitted! Then, making matters worse, the server sends multiple SYN-ACK packets, spaced several seconds apart. When this sort of thing happened, I would see worst-case latency rates for this server shoot up from ~150 milliseconds to 10-30 seconds. When I used the "appmon" to look at the supervisor tree, I would see per-TCP-session processes still alive, even after their TCP socket was supposed to be closed: "netstat -a" on the client side showed nothing, "lsof -p beam-pid" showed that the file descriptors were closed, but Erlang processes were stuck waiting in prim_inet:send/2. Or the server-side socket was still open (confirmed by "netstat" and "lsof") but the client-side was closed, and the Erlang processes stuck waiting in prim_inet:recv0/3. (See below, after the "tethereal" packet summary.) I managed to make this problem go away by: 1. Increasing the gen_tcp:accept/2 {backlog, N} bigger. 2. Changing the priority of the Erlang process doing the accept() work from normal to high. I dunno why this worked, but it does. I was using R10B-9 on a 32-bit Intel box, Red Hat Enterprise Linux 3 Update 4. Do any of these problems sound even vaguely familiar? Or did I merely stumble into a cursed weekend? -Scott --- snip --- snip --- snip --- snip --- snip --- snip --- I can understand why packet 293 didn't get a response: the server's accept(2) backlog was exceeded. But it answers later in packet 16666, except that it doesn't seem really serious about communicating until packet 54751. I've never seen this TCP behavior before. 293 0.306478 192.168.0.166 -> 192.168.0.165 TCP 57463 > 7474 [SYN] Seq=0 Ack=0 Win=5792 Len=0 MSS=1460 16665 3.297657 192.168.0.166 -> 192.168.0.165 TCP 57463 > 7474 [SYN] Seq=0 Ack=0 Win=5792 Len=0 MSS=1460 16666 3.297668 192.168.0.165 -> 192.168.0.166 TCP 7474 > 57463 [SYN, ACK] Seq=0 Ack=1 Win=5792 Len=0 MSS=1460 16718 3.298198 192.168.0.166 -> 192.168.0.165 TCP 57463 > 7474 [ACK] Seq=1 Ack=1 Win=5792 Len=0 16754 3.299662 192.168.0.166 -> 192.168.0.165 TCP 57463 > 7474 [PSH, ACK] Seq=1 Ack=1 Win=5792 Len=36 17961 3.517092 192.168.0.166 -> 192.168.0.165 TCP [TCP Retransmission] 57463 > 7474 [PSH, ACK] Seq=1 Ack=1 Win=5792 Len=36 20362 3.957024 192.168.0.166 -> 192.168.0.165 TCP [TCP Retransmission] 57463 > 7474 [PSH, ACK] Seq=1 Ack=1 Win=5792 Len=36 25186 4.836903 192.168.0.166 -> 192.168.0.165 TCP [TCP Retransmission] 57463 > 7474 [PSH, ACK] Seq=1 Ack=1 Win=5792 Len=36 34206 6.494619 192.168.0.165 -> 192.168.0.166 TCP 7474 > 57463 [SYN, ACK] Seq=0 Ack=1 Win=5792 Len=0 MSS=1460 34211 6.494736 192.168.0.166 -> 192.168.0.165 TCP 57463 > 7474 [ACK] Seq=37 Ack=1 Win=5792 Len=0 34784 6.596630 192.168.0.166 -> 192.168.0.165 TCP [TCP Retransmission] 57463 > 7474 [PSH, ACK] Seq=1 Ack=1 Win=5792 Len=36 54088 10.116193 192.168.0.166 -> 192.168.0.165 TCP [TCP Retransmission] 57463 > 7474 [PSH, ACK] Seq=1 Ack=1 Win=5792 Len=36 54728 12.493896 192.168.0.165 -> 192.168.0.166 TCP 7474 > 57463 [SYN, ACK] Seq=0 Ack=1 Win=5792 Len=0 MSS=1460 54729 12.493957 192.168.0.166 -> 192.168.0.165 TCP 57463 > 7474 [ACK] Seq=37 Ack=1 Win=5792 Len=0 54750 17.154814 192.168.0.166 -> 192.168.0.165 TCP [TCP Retransmission] 57463 > 7474 [PSH, ACK] Seq=1 Ack=1 Win=5792 Len=36 54751 17.154858 192.168.0.165 -> 192.168.0.166 TCP 7474 > 57463 [ACK] Seq=1 Ack=37 Win=5792 Len=0 54756 17.156871 192.168.0.165 -> 
192.168.0.166 TCP 7474 > 57463 [ACK] Seq=1 Ack=37 Win=5792 Len=1448 54757 17.156892 192.168.0.165 -> 192.168.0.166 TCP 7474 > 57463 [ACK] Seq=1449 Ack=37 Win=5792 Len=1448 54758 17.156904 192.168.0.165 -> 192.168.0.166 TCP 7474 > 57463 [PSH, ACK] Seq=2897 Ack=37 Win=5792 Len=1448 54759 17.156983 192.168.0.166 -> 192.168.0.165 TCP 57463 > 7474 [ACK] Seq=37 Ack=1449 Win=8688 Len=0 54760 17.157002 192.168.0.165 -> 192.168.0.166 TCP 7474 > 57463 [ACK] Seq=4345 Ack=37 Win=5792 Len=1448 54761 17.157008 192.168.0.165 -> 192.168.0.166 TCP 7474 > 57463 [PSH, ACK] Seq=5793 Ack=37 Win=5792 Len=892 54762 17.157025 192.168.0.166 -> 192.168.0.165 TCP 57463 > 7474 [ACK] Seq=37 Ack=2897 Win=11584 Len=0 54763 17.157051 192.168.0.166 -> 192.168.0.165 TCP 57463 > 7474 [ACK] Seq=37 Ack=4345 Win=14480 Len=0 54764 17.157097 192.168.0.166 -> 192.168.0.165 TCP 57463 > 7474 [ACK] Seq=37 Ack=5793 Win=17376 Len=0 54765 17.157114 192.168.0.166 -> 192.168.0.165 TCP 57463 > 7474 [ACK] Seq=37 Ack=6685 Win=20272 Len=0 54766 17.157142 192.168.0.166 -> 192.168.0.165 TCP 57463 > 7474 [FIN, ACK] Seq=37 Ack=6685 Win=20272 Len=0 54767 17.157829 192.168.0.165 -> 192.168.0.166 TCP 7474 > 57463 [FIN, ACK] Seq=6685 Ack=38 Win=5792 Len=0 54768 17.157892 192.168.0.166 -> 192.168.0.165 TCP 57463 > 7474 [ACK] Seq=38 Ack=6686 Win=20272 Len=0 --- snip --- snip --- snip --- snip --- snip --- snip --- [ Just a few of the processes are shown ... they all looked like one of the two below.] Node: tmp99@REDACTED, Process: <0.18114.0> [{current_function,{prim_inet,send,2}}, {initial_call,{pss_protocol,new_session,2}}, {status,waiting}, {message_queue_len,0}, {messages,[]}, {links,[#Port<0.19360>,<0.186.0>]}, {dictionary,[{random_seed,{1141,15905,3715}},{line_number,0}]}, {trap_exit,false}, {error_handler,error_handler}, {priority,normal}, {group_leader,<0.44.0>}, {heap_size,196418}, {stack_size,10}, {reductions,161939}, {garbage_collection,[{fullsweep_after,65535}]}] Node: tmp99@REDACTED, Process: <0.16327.0> [{current_function,{prim_inet,recv0,3}}, {initial_call,{pss_protocol,new_session,2}}, {status,waiting}, {message_queue_len,0}, {messages,[]}, {links,[<0.186.0>,#Port<0.17416>]}, {dictionary,[{random_seed,{1141,15647,4245}},{line_number,0}]}, {trap_exit,false}, {error_handler,error_handler}, {priority,normal}, {group_leader,<0.44.0>}, {heap_size,233}, {stack_size,11}, {reductions,81}, {garbage_collection,[{fullsweep_after,65535}]}] [ This was the process info of the gen_server that was overseeing all those hung processes.] 
Node: tmp99@REDACTED, Process: <0.186.0> [{registered_name,pss_sess_mgr}, {current_function,{gen_server,loop,6}}, {initial_call,{proc_lib,init_p,5}}, {status,waiting}, {message_queue_len,0}, {messages,[]}, {links,[<0.6083.0>, <0.8842.0>, <0.13494.0>, <0.18113.0>, <0.18115.0>, <0.18114.0>, <0.16327.0>, <0.11839.0>, <0.13493.0>, <0.13492.0>, <0.8844.0>, <0.11827.0>, <0.8843.0>, <0.7662.0>, <0.8059.0>, <0.8451.0>, <0.8452.0>, <0.8450.0>, <0.8057.0>, <0.8058.0>, <0.7663.0>, <0.7266.0>, <0.7268.0>, <0.7661.0>, <0.7267.0>, <0.6478.0>, <0.6871.0>, <0.6872.0>, <0.6870.0>, <0.6476.0>, <0.6477.0>, <0.6084.0>, <0.2574.0>, <0.4140.0>, <0.4918.0>, <0.5690.0>, <0.5692.0>, <0.6082.0>, <0.5691.0>, <0.5303.0>, <0.5304.0>, <0.5302.0>, <0.4529.0>, <0.4916.0>, <0.4917.0>, <0.4530.0>, <0.4142.0>, <0.4528.0>, <0.4141.0>, <0.3354.0>, <0.3755.0>, <0.3756.0>, <0.3754.0>, <0.3352.0>, <0.3353.0>, <0.2966.0>, <0.2967.0>, <0.2965.0>, <0.1789.0>, <0.2572.0>, <0.2573.0>, <0.2182.0>, <0.2183.0>, <0.2181.0>, <0.1007.0>, <0.1396.0>, <0.1787.0>, <0.1788.0>, <0.1397.0>, <0.1009.0>, <0.1395.0>, <0.1008.0>, <0.231.0>, <0.615.0>, <0.616.0>, <0.614.0>, <0.229.0>, <0.230.0>, <0.46.0>]}, {dictionary,[{'$ancestors',[pss_sup,<0.45.0>]}, {'$initial_call',{gen,init_it, [gen_server, <0.46.0>, <0.46.0>, {local,pss_sess_mgr}, pss_sess_mgr, [pss_protocol], []]}}]}, {trap_exit,true}, {error_handler,error_handler}, {priority,normal}, {group_leader,<0.44.0>}, {heap_size,233}, {stack_size,12}, {reductions,330201}, {garbage_collection,[{fullsweep_after,65535}]}] From mats.cronqvist@REDACTED Mon Mar 6 09:33:27 2006 From: mats.cronqvist@REDACTED (Mats Cronqvist) Date: Mon, 06 Mar 2006 09:33:27 +0100 Subject: optimization of list comprehensions In-Reply-To: <200603060135.k261ZkNu288015@atlas.otago.ac.nz> References: <200603060135.k261ZkNu288015@atlas.otago.ac.nz> Message-ID: <440BF3D7.5030207@ericsson.com> Richard A. O'Keefe wrote: > I wrote: > > I am not at all happy about (Expr || Pat <- List) because > > (A) it looks like a syntax error; it's quite likely to be 'corrected' > > to use square brackets > > Mats Cronqvist replied: > "looks like a syntax error"? doesn't that depend on what the > syntax is? > > No. It depends on what people *THINK* it is. The term was "looks like", > not "is". > > "corrected" by whom? > > People, what else? Maintenance programmers often fix things that aren't > really broken (sometimes until it _is_ broken; the KMP pattern matcher > is a good example.) They are especially likely to fix things that LOOK > broken. so the maintenance programmers are unable to learn new syntax? given that we train our maintenance programmers from scratch, i don't see that as an issue. > anyway, the use of '(' and ')' is incidental, the point is > to not use '[' and ']' (to make it clear that we're not > returning a list). > > But with everything else so very strongly resembling the syntax for > a list comprehension, the use of round parentheses is NOT enough to > make it CLEAR that we are not returning a list. > > If you really want to be clear, make it > > (do Expr for Pat <- List) > > with the trivial case of iterating over no lists being > > (do Expr) > > surely using the [] notation is even worse in that regard. > > A headache may not be as bad as a toothache; but if I have a toothache > it doesn't help me much to give me a headache as well. true enough. unfortunately, the analogy doesn't help me much since i'm not sure if the round parantheses are affecting your head or your teeth. 
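For concreteness, the two idioms being weighed in this thread look like this today. This is a minimal sketch only; the module and function names are illustrative and do not come from any of the posts above.

    -module(lc_effect_demo).
    -export([print_foreach/1, print_lc/1]).

    %% lists:foreach/2 runs the fun purely for its side effect,
    %% returns ok, and builds no result list.
    print_foreach(L) ->
        lists:foreach(fun(X) -> io:format("~p~n", [X]) end, L).

    %% A comprehension used only for its effect reads nicely, but a
    %% result list (here a list of 'ok') is still built and thrown
    %% away; that throwaway list is what the proposed optimization
    %% would avoid constructing.
    print_lc(L) ->
        _ = [io:format("~p~n", [X]) || X <- L],
        ok.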
> > As for combining accumulation with list comprehension, > > one can already write > > lists:foldl(fun (X,Y) -> X+Y end, 0, > > [some list comprehension goes here]) > > which isn't *that* bad. > > i think it is that bad :> > > here's the thing. the list comprehensions were introduced > for no good reason, in that they added nothing that you could > not already do. but they make my life easier because they offer > a much more succinct notation (especially since i have to look > at other people's code a lot). > > I think you just said that making your life easier is not a good reason. no, but i guess i was unclear. i'll try to restate. list comprehensions didn't add any functionality, they just added succinctness. and that turned out to be A Good Thing. therefore, dismissing other features that add succinctness but not functionality is silly. > One really important thing here is that there was *precedent* for > list comprehensions. By the time Erlang got them, list comprehension > was a well established notation (up to choice of punctuation marks) > which had proven itself in several different languages. (Even Prolog > all-solutions" operations are related, and, by the way, always construct > lists.) > > If you except Interlisp and Common Lisp, there *isn't* any precedent > for a comprhenion-cum-fold. So the risk of doing it badly is high; > it's not just a matter of adopting an already-existing notation. And > the evidence so far bears this out: the actual concrete proposals in > this thread are quite astonishingly BAD. yes, it truly is astonishing, the amount of complete and utter drivel one has to wade through. really makes you long for the good old days before arpanet. i feel slightly encouraged that you have to make an exception of Common Lisp though. and perhaps the OTP peoply would be capable of coming up with reasonable notation (even if i'm clearly not). [...] From richardc@REDACTED Sat Mar 4 11:46:15 2006 From: richardc@REDACTED (Richard Carlsson) Date: Sat, 04 Mar 2006 11:46:15 +0100 Subject: Calling a function from command line In-Reply-To: <200603040305.33891.pupeno@pupeno.com> References: <200603040305.33891.pupeno@pupeno.com> Message-ID: <44096FF7.8040209@it.uu.se> Pupeno wrote: > Hello, > I am making a kind of script that generates documentation using edoc. > I need to call edoc:files(["my", "list", "of", "files"], [{so, "me"}, {opt, > "ions"}]) but that can't be called from the shell itself, so, is there a > better way to do it than: > > echo "edoc:files([\"my\", \"list\", \"of\", \"files\"], [{so, \"me\"}, {opt, > \"ions\"}])" | erl Try this: erl -noshell -run edoc_run files '["my","list","of","files"]' \ '[{opt, "ions"}]' -s init stop But if you have organized your files as an application, i suggest: erl -noshell -run edoc_run application $APP '[{opt, "ions"}]' \ -s init stop Check the Makefile for edoc itself for a concrete usage example. /Richard From mickael.remond@REDACTED Mon Mar 6 11:03:57 2006 From: mickael.remond@REDACTED (Mickael Remond) Date: Mon, 6 Mar 2006 11:03:57 +0100 Subject: Unhappiness with high TCP accept rates? 
In-Reply-To: <200603060815.k268F5AV016096@snookles.snookles.com> References: <200603060815.k268F5AV016096@snookles.snookles.com> Message-ID: <20060306100357.GB11866@memphis.ilius.fr> * Scott Lystig Fritchie [2006-03-06 02:15:05 -0600]: > I've spent a very puzzling weekend trying to figure out what has been > causing a problem with an Erlang application that would get perhaps > 100 TCP connection attempts immediately and then new connections every > 20-30 milliseconds. The Erlang VM was CPU-bound (very busy, but I had > some nasty load generators, so "very busy" was normal)... > > ... and then the problems began. Hello Scott, I am curious to know if the epoll patch can improve this situation ? Did you try it ? The patch can be downloaded from: http://developer.sipphone.com/ejabberd/erlang_epoll_patch/ -- Micka?l R?mond http://www.process-one.net/ From rlenglet@REDACTED Mon Mar 6 11:40:54 2006 From: rlenglet@REDACTED (Romain Lenglet) Date: Mon, 6 Mar 2006 19:40:54 +0900 Subject: Unhappiness with high TCP accept rates? In-Reply-To: <200603060815.k268F5AV016096@snookles.snookles.com> References: <200603060815.k268F5AV016096@snookles.snookles.com> Message-ID: <200603061940.54482.rlenglet@users.forge.objectweb.org> > Do any of these problems sound even vaguely familiar? Or did > I merely stumble into a cursed weekend? Did you try to add more device driver threads with erl's +A option? It may have an influence on the TCP driver... -- Romain LENGLET From robert.virding@REDACTED Mon Mar 6 12:03:22 2006 From: robert.virding@REDACTED (Robert Virding) Date: Mon, 06 Mar 2006 12:03:22 +0100 Subject: optimization of list comprehensions In-Reply-To: <4406F4EC.2060706@hq.idt.net> References: <4406C60B.3030902@telia.com> <4406F4EC.2060706@hq.idt.net> Message-ID: <440C16FA.6040409@telia.com> lists:foldl/3 does what you want, you pass in accumulator which is modified, passed along and finally returned. You also have lists:foldr/ which traverses the list right-to-left, and mapfoldl/r which also returns the mapped list. Robert Serge Aleynikov skrev: > I do want to throw a vote for Mats' suggestion on the alternative syntax: > > (I || I <- List) -> ok > > What I also find limiting is that it's not possible to have an > accumulator when using list comprehension. Perhaps something like this > could also be considered (unless someone can suggest a better syntax): > > [Acc+1, I || Acc, I <- List](0) -> Acc1 > ^ > | > Initial Acc's value > Serge > > Robert Virding wrote: >> I have been thinking about this a bit and I wonder if the constructing >> of the return list really causes any problems. Space-wise it will only >> be as large as the size of the input list, so I wouldn't worry. >> >> Robert >> >> Ulf Wiger (AL/EAB) skrev: >> >>> I've seen many examples of how people use >>> list comprehensions as a form of beautified >>> lists:foreach() - that is, they don't care >>> about the return value of the comprehension. >>> >>> I hesitate to say that it's bad practice, >>> even though one will build a potentially >>> large list unnecessarily, since it's actually looks a lot nicer than >>> using lists:foreach(). >>> >>> Question: would it be in some way hideous >>> to introduce an optimization where such >>> list comprehensions do everything except >>> actually build the list? Then they could >>> execute in constant space. 
>>> >>> /Ulf W >>> >> > > From ulf.wiger@REDACTED Mon Mar 6 13:15:36 2006 From: ulf.wiger@REDACTED (Ulf Wiger (AL/EAB)) Date: Mon, 6 Mar 2006 13:15:36 +0100 Subject: early warning - new rdbms Message-ID: The module name is 'rdbms'. I just tried checking it out into a clean directory, and it worked. I took the opportunity to also commit that last patch. /Ulf W > -----Original Message----- > From: Eranga Udesh [mailto:casper2000a@REDACTED] > Sent: den 6 mars 2006 04:29 > To: Ulf Wiger (AL/EAB); erlang-questions@REDACTED > Subject: RE: early warning - new rdbms > > Hi Ulf, > > This is excellent news, thanks. But unfortunately it doesn't > even come when I tried to do CVS download. I downloaded all > the jungerl packages by using "." instead of a module name. > Please check if the check-in is correct? > What's the module name you entered for this project? > > Thanks, > - Eranga > > > > -----Original Message----- > From: Ulf Wiger (AL/EAB) [mailto:ulf.wiger@REDACTED] > Sent: Saturday, March 04, 2006 9:46 PM > To: Eranga Udesh; erlang-questions@REDACTED > Subject: RE: early warning - new rdbms > > > Eranga Udesh wrote: > > > > No New RDBMS yet? > > I finally got around to checking in the new code in CVS (jungerl). > > I messed it up on the first attempt, but hopefully, all is ok > now. It's not visible through the web interface yet. Let me > know if it doesn't work. > > =================================================== > Remember that you have to use the mnesia patches AND > recompile all the other mnesia modules as well (using the > mnesia.hrl in rdbms)!!!! > =================================================== > > I've added a {unique,bool()} attribute on my indexes, and > there's a test suite to "verify" that it works. > I've also added test suites to check that the indexing works > not just for adds, but for update and delete as well. I still > need to check fragmented indexes, and perform a slew of > robustness tests. > > I'm going to be pretty busy with other things for a while, > but I welcome any feedback from the rest of you, including > help with the documentation. (: > > There are also some issues with the make files in jungerl. It > didn't build automatically when I did 'make' at the top > level. I haven't even tried to investigate. Any help with the > make files would be most appreciated. The rdbms.app should be > built automatically, but isn't today (you _can_ build it with > 'builder', though.) > > I'll answer questions and fix bugs as fast as I can. > > Regards, > Ulf W > > > From casper2000a@REDACTED Mon Mar 6 13:51:03 2006 From: casper2000a@REDACTED (Eranga Udesh) Date: Mon, 6 Mar 2006 18:51:03 +0600 Subject: early warning - new rdbms In-Reply-To: Message-ID: <20060306125232.95800400093@mail.omnibis.com> Oh ok. I thought it's an old version, since you said you will upload that to a new module rdbms-1-5. Yes rdbms is there. Thanks, - Eranga -----Original Message----- From: Ulf Wiger (AL/EAB) [mailto:ulf.wiger@REDACTED] Sent: Monday, March 06, 2006 6:16 PM To: Eranga Udesh Cc: erlang-questions@REDACTED Subject: RE: early warning - new rdbms The module name is 'rdbms'. I just tried checking it out into a clean directory, and it worked. I took the opportunity to also commit that last patch. /Ulf W > -----Original Message----- > From: Eranga Udesh [mailto:casper2000a@REDACTED] > Sent: den 6 mars 2006 04:29 > To: Ulf Wiger (AL/EAB); erlang-questions@REDACTED > Subject: RE: early warning - new rdbms > > Hi Ulf, > > This is excellent news, thanks. 
But unfortunately it doesn't > even come when I tried to do CVS download. I downloaded all > the jungerl packages by using "." instead of a module name. > Please check if the check-in is correct? > What's the module name you entered for this project? > > Thanks, > - Eranga > > > > -----Original Message----- > From: Ulf Wiger (AL/EAB) [mailto:ulf.wiger@REDACTED] > Sent: Saturday, March 04, 2006 9:46 PM > To: Eranga Udesh; erlang-questions@REDACTED > Subject: RE: early warning - new rdbms > > > Eranga Udesh wrote: > > > > No New RDBMS yet? > > I finally got around to checking in the new code in CVS (jungerl). > > I messed it up on the first attempt, but hopefully, all is ok > now. It's not visible through the web interface yet. Let me > know if it doesn't work. > > =================================================== > Remember that you have to use the mnesia patches AND > recompile all the other mnesia modules as well (using the > mnesia.hrl in rdbms)!!!! > =================================================== > > I've added a {unique,bool()} attribute on my indexes, and > there's a test suite to "verify" that it works. > I've also added test suites to check that the indexing works > not just for adds, but for update and delete as well. I still > need to check fragmented indexes, and perform a slew of > robustness tests. > > I'm going to be pretty busy with other things for a while, > but I welcome any feedback from the rest of you, including > help with the documentation. (: > > There are also some issues with the make files in jungerl. It > didn't build automatically when I did 'make' at the top > level. I haven't even tried to investigate. Any help with the > make files would be most appreciated. The rdbms.app should be > built automatically, but isn't today (you _can_ build it with > 'builder', though.) > > I'll answer questions and fix bugs as fast as I can. > > Regards, > Ulf W > > > From serge@REDACTED Mon Mar 6 14:43:19 2006 From: serge@REDACTED (Serge Aleynikov) Date: Mon, 06 Mar 2006 08:43:19 -0500 Subject: optimization of list comprehensions In-Reply-To: <440C16FA.6040409@telia.com> References: <4406C60B.3030902@telia.com> <4406F4EC.2060706@hq.idt.net> <440C16FA.6040409@telia.com> Message-ID: <440C3C77.5060608@hq.idt.net> Thanks Robert, I indeed use lists:foldl/3 quite extensively. However the idea is that if the parser supports list comprehensions, why not allowing for folding as a variation of the comprehension syntax, as this pattern is used as frequently as the basic list comprehension itself? If we can represent: lists:map( fun(I) -> I end, lists:filter( fun(N) -> N > 10 end, lists:seq(1, 20)). like this: [I || I <- lists:seq(1, 20), I > 20]. Why not having a similar concise representation for the following case? lists:foldl( fun(I, Acc) -> Acc+I end, 0, lists:filter( fun(N) -> N > 10 end, lists:seq(1, 20)). Regards, Serge Robert Virding wrote: > lists:foldl/3 does what you want, you pass in accumulator which is > modified, passed along and finally returned. You also have lists:foldr/ > which traverses the list right-to-left, and mapfoldl/r which also > returns the mapped list. > > Robert > > Serge Aleynikov skrev: > >> I do want to throw a vote for Mats' suggestion on the alternative syntax: >> >> (I || I <- List) -> ok >> >> What I also find limiting is that it's not possible to have an >> accumulator when using list comprehension. 
Perhaps something like >> this could also be considered (unless someone can suggest a better >> syntax): >> >> [Acc+1, I || Acc, I <- List](0) -> Acc1 >> ^ >> | >> Initial Acc's value >> Serge >> >> Robert Virding wrote: >> >>> I have been thinking about this a bit and I wonder if the >>> constructing of the return list really causes any problems. >>> Space-wise it will only be as large as the size of the input list, so >>> I wouldn't worry. >>> >>> Robert >>> >>> Ulf Wiger (AL/EAB) skrev: >>> >>>> I've seen many examples of how people use >>>> list comprehensions as a form of beautified >>>> lists:foreach() - that is, they don't care >>>> about the return value of the comprehension. >>>> >>>> I hesitate to say that it's bad practice, >>>> even though one will build a potentially >>>> large list unnecessarily, since it's actually looks a lot nicer than >>>> using lists:foreach(). >>>> >>>> Question: would it be in some way hideous >>>> to introduce an optimization where such >>>> list comprehensions do everything except >>>> actually build the list? Then they could >>>> execute in constant space. >>>> >>>> /Ulf W From ulf.wiger@REDACTED Mon Mar 6 14:49:08 2006 From: ulf.wiger@REDACTED (Ulf Wiger (AL/EAB)) Date: Mon, 6 Mar 2006 14:49:08 +0100 Subject: pruning the jungerl? Message-ID: A question. Jungerl is indeed dense and chaotic, but will there be a time when we can remove at least the applications that are now part of OTP? I'm of course thinking primarily of - xmerl - syntax_tools - edoc (We're using ClearCase here at work, and at least in CC, removing old stuff is simple and straight- forward, since the directories are also version- controlled. The old stuff isn't deleted, but you don't have to see it unless you dig for it.) /Ulf W From surindar.shanthi@REDACTED Mon Mar 6 15:19:14 2006 From: surindar.shanthi@REDACTED (Surindar Sivanesan) Date: Mon, 6 Mar 2006 19:49:14 +0530 Subject: incarporating an infinite thread in supervision tree Message-ID: <42ea5fb60603060619i2efdbf0bh8b0246ddba147e7e@mail.gmail.com> Dear all, I has a thread which has a function which call another function every 2 minutes ( infinitely till the timer is stopped). Is it possible to incarporate this functionality in a supervision tree? If so please give me the solution. -- with regards, S.Surindar -------------- next part -------------- An HTML attachment was scrubbed... URL: From ulf.wiger@REDACTED Mon Mar 6 15:23:50 2006 From: ulf.wiger@REDACTED (Ulf Wiger (AL/EAB)) Date: Mon, 6 Mar 2006 15:23:50 +0100 Subject: new plain_fsm in jungerl Message-ID: I've checked in a new version of plain_fsm into jungerl. The new features are mainly: - a wider selection of start functions - plain_fsm:hibernate(M,F,A), which does what erlang:hibernate(M,F,A) does, but also checks if the module has been upgraded, and automatically calls M:code_change/3 before resuming operation. The parse-transformery has been completely re-written and uses syntax_tools. I recently added some more controlled error handling, once I figured out how to do that (it's not documented, but if you read the code in compile.erl, the hooks are there.) 
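On the supervision-tree question earlier in this batch (calling a function every 2 minutes until stopped): one common shape is an ordinary gen_server child that re-arms a timer from handle_info/2, which is essentially the erlang:send_after/3 approach suggested in the next message. The sketch below is illustrative only; the module name and the actual work are placeholders.

    -module(periodic_worker).
    -behaviour(gen_server).

    -export([start_link/0]).
    -export([init/1, handle_call/3, handle_cast/2, handle_info/2,
             terminate/2, code_change/3]).

    -define(INTERVAL, (2 * 60 * 1000)).   %% two minutes, in milliseconds

    start_link() ->
        gen_server:start_link({local, ?MODULE}, ?MODULE, [], []).

    init([]) ->
        erlang:send_after(?INTERVAL, self(), tick),   %% arm the first timer
        {ok, no_state}.

    handle_info(tick, State) ->
        do_work(),                                    %% placeholder job
        erlang:send_after(?INTERVAL, self(), tick),   %% re-arm
        {noreply, State}.

    handle_call(_Req, _From, State) -> {reply, ok, State}.
    handle_cast(_Msg, State) -> {noreply, State}.
    terminate(_Reason, _State) -> ok.
    code_change(_OldVsn, State, _Extra) -> {ok, State}.

    do_work() ->
        io:format("periodic job ran~n", []).

The supervisor then lists periodic_worker as a plain worker child; stopping the server stops the timer loop with it.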
/Ulf W From lennart.ohman@REDACTED Mon Mar 6 15:28:06 2006 From: lennart.ohman@REDACTED (Lennart Ohman) Date: Mon, 6 Mar 2006 15:28:06 +0100 Subject: incarporating an infinite thread in supervision tree In-Reply-To: <42ea5fb60603060619i2efdbf0bh8b0246ddba147e7e@mail.gmail.com> Message-ID: <000701c6412a$2fbb3e30$0600a8c0@st.se> Take a look at: erlang:send_after/3 and let a gen_server handle the incomming message in its handle_info function. Best Regards, Lennart ------------------------------------------------------------- Lennart Ohman office : +46-8-587 623 27 Sjoland & Thyselius Telecom AB cellular: +46-70-552 67 35 Sehlstedtsgatan 6 fax : +46-8-667 82 30 SE-115 28, STOCKHOLM, SWEDEN email : lennart.ohman@REDACTED > -----Original Message----- > From: owner-erlang-questions@REDACTED [mailto:owner-erlang- > questions@REDACTED] On Behalf Of Surindar Sivanesan > Sent: Monday, March 06, 2006 3:19 PM > To: erlang-questions@REDACTED > Subject: incarporating an infinite thread in supervision tree > > > Dear all, > I has a thread which has a function which call another function every 2 > minutes ( infinitely till the timer is stopped). > Is it possible to incarporate this functionality in a supervision tree? If > so please give me the solution. > -- > with regards, > S.Surindar From ulf.wiger@REDACTED Mon Mar 6 17:31:31 2006 From: ulf.wiger@REDACTED (Ulf Wiger (AL/EAB)) Date: Mon, 6 Mar 2006 17:31:31 +0100 Subject: proc added to jungerl Message-ID: I added an application called 'proc' to jungerl. It's a development based on proc_reg, but sufficiently different that I wanted to rename it. The idea: - 'proc' is a local process registry - proc:reg(Name) registers a unique name (any term) - proc:add_property(Property) registers a property (any term) - proc:fold_names(Fun, Acc, SelectPatterns) allows you to operate on a group of named processes. The fun takes ({Name,Pid},Acc) - proc:fold_properties(...) similar function for folding over properties. For example: proc:fold_properties( fun({_,Pid},Acc) -> [{Pid|sys:get_status(Pid)}|Acc] end, [], [{h248_link_handler,[],[true]}]) would select all processes that have registered themselves as h248 link handler processes, and return the result of sys:get_status(Pid) on each. Another use of this is publish/subscribe. A subscribe() function could add a property to the current process, and notify() could fold over the property and send a message to each subscriber. Each registered process is automatically monitored, of course, and removed from the index if it dies. Those are the key functions. There are many other small utility functions. Essentially, proc becomes an index with which to find processes, by publishing known characteristics of each process. One of the ideas behind proc is to have a way to keep track of processes in very large systems (100s of thousand processes). proc:i/1 works like the i() function, but takes a filter (select patterns on names or properties) defining which processes to include or exclude from the listing. /Ulf W From fritchie@REDACTED Mon Mar 6 20:34:28 2006 From: fritchie@REDACTED (Scott Lystig Fritchie) Date: Mon, 06 Mar 2006 13:34:28 -0600 Subject: Unhappiness with high TCP accept rates? In-Reply-To: Message of "Mon, 06 Mar 2006 11:03:57 +0100." <20060306100357.GB11866@memphis.ilius.fr> Message-ID: <200603061934.k26JYTQs099277@snookles.snookles.com> >>>>> "mr" == Mickael Remond writes: mr> I am curious to know if the epoll patch can improve this situation? No, I haven't. 
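A rough sketch of the publish/subscribe idea mentioned in the 'proc' announcement above. Everything here is written against the API only as it is described in that post (add_property/1, and fold_properties/3 taking a fun, an accumulator and match-spec style select patterns), so the exact argument shapes are assumptions, not the actual jungerl code.

    -module(pubsub_sketch).
    -export([subscribe/1, notify/2]).

    %% A process that wants events for Topic tags itself with a property.
    subscribe(Topic) ->
        proc:add_property({subscriber, Topic}).

    %% Send Msg to every process carrying the matching property and
    %% return how many subscribers were notified.
    notify(Topic, Msg) ->
        proc:fold_properties(
          fun({_Property, Pid}, N) -> Pid ! Msg, N + 1 end,
          0,
          [{{subscriber, Topic}, [], [true]}]).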
The total number of TCP connections at any instant has been less than 150 during tests. Romain LENGLET asked: rl> Did you try to add more device driver threads with erl's +A rl> option? I do not believe that the inet_drv.c driver is capable of using the Pthread worker pool: the "USE_THREADS" CPP symbol does not appear anywhere in that file. -Scott From ok@REDACTED Mon Mar 6 22:30:24 2006 From: ok@REDACTED (Richard A. O'Keefe) Date: Tue, 7 Mar 2006 10:30:24 +1300 (NZDT) Subject: optimization of list comprehensions Message-ID: <200603062130.k26LUOJl295644@atlas.otago.ac.nz> Mats Cronqvist wrote: list comprehensions didn't add any functionality, they just added succinctness. and that turned out to be A Good Thing. therefore, dismissing other features that add succinctness but not functionality is silly. Has anyone actually made such a dismissal? I certainly haven't. After all, one of the arguments for Erlang itself is its conciseness compared with Java (although I must say it is very hard for any programming language not to be more concise than Java). The requirements for any new notation are - that it be readable (so using strange characters is unwise, Smalltalk's use of "^" (historically an up-arrow) for "return" pushes the limits), and using unpronouncable keywords is folly (keywords written backwards didn't succeed in the marketplace of ideas, and apparently even Microsoft have given up on Hungarian notation) - that it not depend on fine distinctions of character shapes (the use of [...] for lists and {...} for tuples is already pushing the limits a bit; the different senses given to "," and ";" in Prolog certainly proved very confusing even for people capable of writing Prolog compilers) - that it not tend to mislead people into believing that it means something else (thus Fortran's use of / for integer division, followed by C and of course Java, was a Bad Idea) - that it have sufficiently high payoff (there must be many occasions to use it, and the effort saved by using it must be great enough). Since programs are *read* more often than they are *written*, the effort that needs to be saved is the effort of *reading* and understanding the notation, not the effort of *writing*. Now we actually have three things we are discussing here: - list comprehensions (and why don't we have tuple comprehensions? Clean has them, and I use them a lot when writing Clean) We have them in Erlang, because of Mnemonsyne. - list walking for side effect (not needed in Haskell because there aren't any side effects) We can get this effect simply by sticking "_ =" in front of a list comprehension (or any other variable name that is clearly not intended to be used again) and having a very slightly smarter compiler. I _hope_ it would not be very useful. - list folding (and again, tuple folding would be nice too) Presumably one of the reasons that Haskell doesn't have this is that Haskell has at least four different versions of fold This one certainly would be useful; the OTP sources could use some kind of fold at least 400 times, and foldl at least 300 times. I think it would be advisable to check several hundred potential uses of the notation before designing a concrete syntax. From ok@REDACTED Mon Mar 6 22:43:21 2006 From: ok@REDACTED (Richard A. 
O'Keefe) Date: Tue, 7 Mar 2006 10:43:21 +1300 (NZDT) Subject: optimization of list comprehensions Message-ID: <200603062143.k26LhLcE296488@atlas.otago.ac.nz> Serge Aleynikov wrote: If we can represent: lists:map(fun(I) -> I end, lists:filter(fun(N) -> N > 10 end, lists:seq(1, 20)). like this: [I || I <- lists:seq(1, 20), I > 10]. Why not have a similar concise representation for the following case? lists:foldl(fun(I, Acc) -> Acc+I end, 0, lists:filter(fun(N) -> N > 10 end, lists:seq(1, 20)). Well, we can improve that: lists:foldl(fun(I, Acc) -> Acc+I end, 0, [I || I <- lists:seq(1, 20), I > 10]) and indeed we can improve it to lists:sum([I || I <- lists:seq(1, 20), I > 10]) which is pretty hard to improve on, no? The Haskell version is more concise: sum [x | x <- [1..20], x > 10] but that's because Haskell has .. notation and doesn't use parentheses or commas nearly as much as Erlang does. From matthias@REDACTED Mon Mar 6 23:13:52 2006 From: matthias@REDACTED (Matthias Lang) Date: Mon, 6 Mar 2006 23:13:52 +0100 Subject: optimization of list comprehensions In-Reply-To: <200603062143.k26LhLcE296488@atlas.otago.ac.nz> References: <200603062143.k26LhLcE296488@atlas.otago.ac.nz> Message-ID: <17420.46112.475768.664042@antilipe.corelatus.se> > Serge Aleynikov wrote: > Why not have a similar concise representation for the following case? > > lists:foldl(fun(I, Acc) -> Acc+I end, 0, > lists:filter(fun(N) -> N > 10 end, > lists:seq(1, 20)). Richard A. O'Keefe writes: > Well, we can improve that: > > lists:foldl(fun(I, Acc) -> Acc+I end, 0, > [I || I <- lists:seq(1, 20), I > 10]) > > and indeed we can improve it to > > lists:sum([I || I <- lists:seq(1, 20), I > 10]) > > which is pretty hard to improve on, no? Oh, I don't know. If short is all that matters, how about: lists:sum([I || I <- lists:seq(11, 20)]). 155 and indeed we can improve it to: lists:sum(lists:seq(11, 20)). 155 which is pretty hard to improve on without entering _indisputably_ useless territory: lists:sum("155"). 155 Matthias From serge@REDACTED Tue Mar 7 00:07:10 2006 From: serge@REDACTED (Serge Aleynikov) Date: Mon, 06 Mar 2006 18:07:10 -0500 Subject: optimization of list comprehensions In-Reply-To: <200603062143.k26LhLcE296488@atlas.otago.ac.nz> References: <200603062143.k26LhLcE296488@atlas.otago.ac.nz> Message-ID: <440CC09E.9050502@hq.idt.net> Richard A. O'Keefe wrote: > Well, we can improve that: > > lists:foldl(fun(I, Acc) -> Acc+I end, 0, > [I || I <- lists:seq(1, 20), I > 10]) > > and indeed we can improve it to > > lists:sum([I || I <- lists:seq(1, 20), I > 10]) > > which is pretty hard to improve on, no? The Haskell version is > more concise: > > sum [x | x <- [1..20], x > 10] Well, you could as well say that one could similarly do: -import(lists, [sum/1, seq/2]). sum([I || I <- seq(1,20), I > 10]). > but that's because Haskell has .. notation and doesn't use parentheses > or commas nearly as much as Erlang does. I agree with your statement in the previous email that it's important not to clutter the language with rediculous notations, yet if list comprehensions are already a part of the language with implementation of map and filter patterns, why not allowing for folding as well, which is probably as frequently used operation as map? Considering more complicated folding examples, I tend to believe that the more complicated/lengthy the function implementation under the fold is, the less it matters whether lists:fold vs. comprehension is used, as most of the mental reading effort is focussed on the folding function itself. 
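For concreteness, here is the running example of this sub-thread in its current spellings (plain shell session, nothing new); these are the forms a fold-comprehension would be competing with:

    1> lists:foldl(fun(I, Acc) -> Acc + I end, 0,
                   [I || I <- lists:seq(1, 20), I > 10]).
    155
    2> lists:sum([I || I <- lists:seq(1, 20), I > 10]).
    155
    3> lists:foldl(fun(I, Acc) when I > 10 -> Acc + I;
                      (_, Acc) -> Acc
                   end, 0, lists:seq(1, 20)).
    155

The third form is the same fold with the filter moved into the fun, i.e. no comprehension at all.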
-- Serge Aleynikov R&D Telecom, IDT Corp. Tel: (973) 438-3436 Fax: (973) 438-1464 serge@REDACTED From serge@REDACTED Tue Mar 7 00:42:06 2006 From: serge@REDACTED (Serge Aleynikov) Date: Mon, 06 Mar 2006 18:42:06 -0500 Subject: smart exceptions Message-ID: <440CC8CE.7030901@hq.idt.net> 1. Am I correct to say that when using smart exceptions, the beam files in jungerl/lib/smart_exceptions/ebin are only needed at compile-time and not at run-time? 2. Is there a way to use smart_exceptions in order to include function/line info in the uncaught throw exception? $ cat test.erl -module(test). -export([t/1]). t(1) -> A = 1, exit({error, A}); t(2) -> 100 / 0; t(3) -> A = abc, {error, test} = {error, A}; t(4) -> throw({error, test}). $ erlc +'{parse_transform, smart_exceptions}' \ -pa jungerl/lib/smart_exceptions/ebin test.erl # Note: we are not including a -pa argument for smart_exceptions $ erl Erlang (BEAM) emulator version 5.4.12 [source] [hipe] [threads:0] Eshell V5.4.12 (abort with ^G) 1> [(catch test:t(N)) || N <- [1,2,3,4]]. [{'EXIT',{{test,t,1},{line,6},{error,1}}}, {'EXIT',{{test,t,1}, {line,9}, {badarith,[{test,t,1}, {erl_eval,do_apply,5}, {erl_eval,expr,5}, {erl_eval,eval_lc1,5}, {lists,flatmap,2}, {lists,flatmap,2}, {erl_eval,eval_lc,6}, {shell,exprs,6}]}, '/', [100,0]}}, {'EXIT',{{test,t,1},{line,13},match,[{error,abc}]}}, {error,test}] 2> q(). -- Serge Aleynikov R&D Telecom, IDT Corp. Tel: (973) 438-3436 Fax: (973) 438-1464 serge@REDACTED From ok@REDACTED Tue Mar 7 01:06:00 2006 From: ok@REDACTED (Richard A. O'Keefe) Date: Tue, 7 Mar 2006 13:06:00 +1300 (NZDT) Subject: optimization of list comprehensions Message-ID: <200603070006.k27060ki296681@atlas.otago.ac.nz> Serge Aleynikov wrote: I agree with your statement in the previous email that it's important not to clutter the language with rediculous (sic.) notations, yet if list comprehensions are already a part of the language with implementation of map and filter patterns, why not allowing for folding as well, which is probably as frequently used operation as map? I have already explained that in some detail. Because the combination of map and filter requires only *SINGLE* bindings for patterns, whereas folding requires *DOUBLE* bindings for patterns, and there are no other constructs anywhere in Erlang that do double binding. Considering more complicated folding examples, I tend to believe that the more complicated/lengthy the function implementation under the fold is, the less it matters whether lists:fold vs. comprehension is used, as most of the mental reading effort is focussed on the folding function itself. Exactly so. The simple ones (sum, min, max) already have names of their own. Take this example from the OTP sources, which I have reindented: case (catch lists:foldl(fun(X, Acc) -> case file:raw_read_file_info(filename:join([X, ModuleFile])) of {ok,_} -> Y = filename:split(X), throw(filename:join(lists:sublist(Y, length(Y)-1)++["priv"])) ; _ -> Acc end end, false, P)), of false -> exit(uds_dist_priv_lib_indeterminate) ; _ -> Pd end Not a very inspiring example, but the first one I came across. It's a rather trivial fold: it either threads 'false' all the way through to the end or raises an exception; if 'false' gets through then it exits otherwise it returns the new name. This particular case would be much clearer using a separate function: ... find_full_name(P, ModuleFile) ... 
find_full_name([], _) -> exit(uds_dist_priv_lib_indeterminate); find_full_name([X|Xs], ModuleFile) -> case file:raw_read_file_info(filename:join([X,ModuleFile])) of {ok,_} -> Y = filename:split(X), filename:join(lists:sublist(Y, length(Y)-1) ++ ["priv"]) ; _ -> find_full_name(Xs, ModuleFile) end. Here's another one from OTP: NewS = foldl(fun(App, S1) -> case get_loaded(App) of {true, _} -> S1 ; false -> case do_load_application(App, S1) of {ok, S2} -> S2 ; Error -> throw(Error) end end end, S, IncApps), In fact, I spent about 20 minutes looking at foldl calls in the OTP sources, and *most* of them had complex functions and list arguments that were just simple variable names. I saw a literal list and a few calls to reverse() or delete() or sort() with simple variable arguments and record field extraction, but mostly simple variables. Here's the only complex example I found in those 20 minutes: find_modules(P, [Path|Paths], Ext, S0) -> case file:list_dir(filename:join(Path, P)) of {ok, Fs} -> S1 = lists:foldl( fun(F, S) -> sets:add_element(filename:rootname(F, Ext), S) end, S0, [F || F <- Fs, filename:extension(F) =:= Ext]), find_modules(P, Paths, Ext, S1) ; _ -> find_modules(P, Paths, Ext, S0) end; find_modules(_P, [], _Ext, S) -> sets:to_list(S). The best notation I've been able to come up with for folding would shrink the call to lists:foldl/3 by 2 lines: let S1 = S0 then sets:add_element(filename:rootname(F, Ext), S1) for F <- Fs, filename:extension(F) =:= Ext in find_modules(P, Paths, Ext, S1) But then the whole function would shrink from 13 lines to 11 lines, which isn't _that_ much of a shrinkage. And remember, this is the most complicated (no, the _only_ complicated) example I found in a sample of 50 kSLOC of Erlang source code. A notation which saves you 2 SLOC out of 50 kSLOC is not _hugely_ beneficial... So my earlier estimate of the number of places where a list folding notation could be used in the OTP sources must not be mistaken for an estimate of the number of places where such a notation would in fact improve the readability of the code, because the part of the expression that would be simplified is already as simple as it can be. In short, while Serge Aleynikov is justified in saying that "folding ... is probably as frequently used [an] operation as map", he is _not_ justified in concluding that a special purpose notation for folding would be as advantageous as a special purpose notation for filter+map. Now, that's just one sample. It's up to the proponents of a new notation to provide other samples showing that the notation _would_ pay off. And in fairness, I must admit that if such a notation did exist, code might be written differently to take more advantage of it. But that's just speculation. From sanjaya@REDACTED Tue Mar 7 08:12:15 2006 From: sanjaya@REDACTED (Sanjaya Vitharana) Date: Tue, 7 Mar 2006 13:12:15 +0600 Subject: mnesia_frag, mnesia:restore, {aborted, nested_transaction} Message-ID: <00f601c641b6$769f4300$5f00a8c0@wavenet.lk> Hi ... All, Below transection failed with {aborted, nested_transaction} on fragmented table. RestoreFun = fun(DbRestoreFile) -> mnesia:restore(DbRestoreFile,[]) end, mnesia:activity(transaction, RestoreFun, [DBRestoreFileName], mnesia_frag) ------------------------ But ..... 
mnesia:restore(DBRestoreFileName,[]) works fine WITHOUT mnesia:activity(transaction,...,mnesia_frag) ------------------------ I have used mnesia:activity(transaction,...,mnesia_frag) for all the transections on my fagmented table, even for mnesia:backup (as below) BackupFun = fun(DbBackupFile) -> mnesia:backup(DbBackupFile) end, case mnesia:activity(transaction, BackupFun, [DbBckFile], mnesia_frag) of ----------------------- 1.) so ...restoring WITHOUT mnesia:activity(transaction,...,mnesia_frag) will make any difference in this situation ?? 2.) why in firstcase it gives {aborted, nested_transaction} ???? Can't we use the mnesia:restore with mnesia:activity(transaction,..., mnesia_frag) ??? Thanks in advance Sanjaya Vitharana -------------- next part -------------- An HTML attachment was scrubbed... URL: From tzheng@REDACTED Tue Mar 7 03:23:05 2006 From: tzheng@REDACTED (Tony Zheng) Date: Mon, 06 Mar 2006 18:23:05 -0800 Subject: SSL Setup Problem Message-ID: <1141698185.4753.8.camel@gateway> Hi luvish I met the same problem about SSL. Would you mind telling me how you can solve it? Thanks. tony >hi chandru, > thanks for the reply. ssl:socket() was a mistake, I'm >actually using ssl:listen() but still I've not been >able to get it working properly. I don't know where >i'm >missing out. > As soon as i start my server, the error is thrown, >which i included in my last mail, showing that >gen_server is terminating. if you can suggest some >other solution, please help me. >thanks >/luvish >--- Chandrashekhar Mullaparthi > wrote: > Hi, > > On 4 Jul 2005, at 09:36, luvish satija wrote: > > > Hello all, > > > > I am trying to set up ssl for erlang on my > system > > but facing some problems. I am using the following > > .rel file for making boot script. > > > > {release, {"OTP APN 181 01","R10B"}, {erts, > "5.4.6"}, > > [{kernel,"2.10.7"}, > > {stdlib,"1.13.7"}, > > {ssl, "3.0.5"}]}. > > > > Now, when I try to call make_script on this, I > get > > the following warnings: > > > > *WARNING* ssl: Source code not found: > 'SSL-PKIX'.erl > > > This is ok - the systools:make_script looks for > source files under > App/src directory. In this case, the source files > are in a non-standard > directory which is why you see these error messages. > > > > > If I igonore these warnings and start the ssl > > system using the produced boot script, then I am > not > > able to listen on any port using ssl:socket() and > > following error is thrown: > > You should be listening using ssl:listen/2 - > ssl:socket isn't even an > exported function. > > cheers > Chandru > > From chandrashekhar.mullaparthi@REDACTED Tue Mar 7 09:15:29 2006 From: chandrashekhar.mullaparthi@REDACTED (chandru) Date: Tue, 7 Mar 2006 08:15:29 +0000 Subject: mnesia_frag, mnesia:restore, {aborted, nested_transaction} In-Reply-To: <00f601c641b6$769f4300$5f00a8c0@wavenet.lk> References: <00f601c641b6$769f4300$5f00a8c0@wavenet.lk> Message-ID: On 07/03/06, Sanjaya Vitharana wrote: > > Hi ... All, > > Below transection failed with {aborted, nested_transaction} on fragmented > table. > > RestoreFun = fun(DbRestoreFile) -> > mnesia:restore(DbRestoreFile,[]) > end, > > mnesia:activity(transaction, RestoreFun, [DBRestoreFileName], mnesia_frag) > You can't put a schema transaction within a normal transaction. mnesia:restore basically restores your schema along with other tables from the backup file. 
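Put as code, the pattern that follows from this: keep the fragment-aware activity wrapper around mnesia:backup/1 (as in the original post), but call mnesia:restore/2 on its own, because it runs a schema transaction internally and cannot be nested. A minimal sketch; the module name and file arguments are illustrative.

    -module(frag_backup_sketch).
    -export([backup/1, restore/1]).

    %% Backup inside the usual activity wrapper, as in the original post.
    backup(File) ->
        mnesia:activity(transaction,
                        fun() -> mnesia:backup(File) end,
                        [],
                        mnesia_frag).

    %% Restore with no surrounding transaction or activity: mnesia:restore
    %% is itself a schema transaction, so wrapping it in one gives
    %% {aborted, nested_transaction}.
    restore(File) ->
        mnesia:restore(File, []).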
cheers Chandru From ulf.wiger@REDACTED Tue Mar 7 09:17:59 2006 From: ulf.wiger@REDACTED (Ulf Wiger (AL/EAB)) Date: Tue, 7 Mar 2006 09:17:59 +0100 Subject: optimization of list comprehensions Message-ID: Richard A. O'Keefe wrote: > In fact, I spent about 20 minutes looking at foldl calls in > the OTP sources, and *most* of them had complex functions and > list arguments that were just simple variable names. Let me then provide a more interesting example. Based on (and supposedly equivalent to) real-life code, but never compiled nor tested: mk_tops(TIds1, TIds2, Dir, SId) when is_list(TIds1), is_list(TIds2) -> [#ctxtTop{frTId = TId1, toTId = TId2, dir = Dir, strId = SId} || TId1 <- TIds1, TId2 <- TIds2, TId1 =/= TId2]. expand(Tops, TIds) -> lists:foldr( fun(#ctxtTop{frTId = '*', toTId = '*', dir = Dir, strId = SId}, Acc) -> Acc ++ mk_tops(TIds, TIds, Dir, Sid); (#ctxtTop{frTId = T1, toTId = '*', dir = Dir, strId = SId}, Acc) -> Acc ++ mk_tops([T1], TIds, Dir, SId); (#ctxtTop{frTId = '*', toTId = T1, dir = Dir, strId = SId}, Acc) -> Acc ++ mk_tops(TIds, [T1], Dir, SId) end, [], Tops). or, rewritten without ++: expand(Tops, TIds) -> lists:foldl( fun(#ctxtTop{frTId = '*', toTId = '*', dir = Dir, strId = SId}, Acc) -> mk_tops(TIds, TIds, Dir, Sid, Acc); (#ctxtTop{frTId = T1, toTId = '*', dir = Dir, strId = SId}, Acc) -> mk_tops([T1], TIds, Dir, SId, Acc); (#ctxtTop{frTId = '*', toTId = T1, dir = Dir, strId = SId}, Acc) -> mk_tops(TIds, [T1], Dir, SId, Acc) end, [], Tops). mk_tops(TIds1, TIds2, Dir, SId, Acc) -> lists:foldl( fun(TId1, Acc1) -> lists:foldl( fun(TId2, Acc11) when TId1 =/= TId2 -> [#ctxtTop{frTId = TId1, toTId = TId2, dir = Dir, strId = SId} | Acc11] end, Acc1, TIds2) end, Acc, TIds1). I have no own idea on how to improve it, with or without folding lcs. Regards, Ulf W From ulf.wiger@REDACTED Tue Mar 7 11:34:35 2006 From: ulf.wiger@REDACTED (Ulf Wiger (AL/EAB)) Date: Tue, 7 Mar 2006 11:34:35 +0100 Subject: updates to new rdbms Message-ID: I've checked in a bug fix of rdbms_index, and added some utility functions for indexing in rdbms.erl (1) ix(Value, []) -> [Value]. (2) ix_list(Value, []) when is_list(Value) -> Value. (3) ix_vals(Obj, PosL) -> [[element(P,Obj) || P <- PosL]]. The idea is to not have to write these functions over and over (or at least once!) (1) makes a simple value index. You can still make it ordered, unique, etc. (2) expands a list of values and makes each list element an index value (3) is like the snmp indexes. Combine several attribute values into one index value. Example of how to specify them (from the test suite): rdbms:create_table( ix_list, [{disc_copies, [node()]}, {attributes, [key, value]}, {rdbms, [ {indexes, [{{value,ix},rdbms,ix_list,[], [{type,ordered}]}]} ]} ]), and how to use them: trans(fun() -> mnesia:write({ix_list, 1, [a,b,c]}), mnesia:write({ix_list, 2, [c,d,e]}), mnesia:write({ix_list, 3, [c,d,e]}) end), trans(fun() -> [{ix_list,1,[a,b,c]}] = mnesia:index_read(ix_list,a,{value,ix}), [{ix_list,1,[a,b,c]}, {ix_list,2,[c,d,e]},{ix_list,3,[c,d,e]}] = mnesia:index_read(ix_list,c,{value,ix}), [{ix_list,2,[c,d,e]},{ix_list,3,[c,d,e]}] = mnesia:index_read(ix_list,d,{value,ix}) end), /Ulf W From ulf.wiger@REDACTED Tue Mar 7 09:22:37 2006 From: ulf.wiger@REDACTED (Ulf Wiger (AL/EAB)) Date: Tue, 7 Mar 2006 09:22:37 +0100 Subject: FW: [Felix-language] Copy of post to comp.lang.c++ Message-ID: I still haven't gotten around to actually testing Felix together with erlang yet, but it's still on my list. Here's a marketing blurb sent to comp.lang.c++. 
-----Original Message----- From: felix-language-admin@REDACTED [mailto:felix-language-admin@REDACTED] On Behalf Of skaller Sent: den 7 mars 2006 06:54 To: felix-language@REDACTED Subject: [Felix-language] Copy of post to comp.lang.c++ Subject: Felix 1.1.2 Release Candidate 4 From: skaller Newsgroups: comp.lang.c++ Date: Tue, 07 Mar 2006 16:21:08 +1100 The Felix project requires people to help test version 1.1.2 release candidate 4 source build. Please go to http://felix.sourceforge.net if you're interested in helping. The licence is FFAU (like BSD or Boost licences). You will need Ocaml 3.08/9 and Python 2.x plus a C++ compiler to build from source. GNU g++ and MSVC++ are supported (and we'll add support for any other compiler if there is a volunteer maintainer). We also need C++ programmers interested in wrapping their favourite C and C++ libraries to create and maintain bindings. Support for SDL, OpenGL and gmp++ is currently provided. Felix generates 'almost ISO C++', which should compile on most platforms. The build system is entirely written in Python and does not require any of the usual build tools (no make, autoconf or whatever) What is Felix? It's a new programming language specifically designed as an upgrade path from C++, in the same way as C++ is an upgrade path from C. It preserves both object and source compatibility with C++, although the mechanism is different -- Felix replaces the C++ type system and syntax, and so it is a distinct language in its own right. Felix features built in garbage collection, first class functions and procedures, high performance user space threading, parametric polymorphism, variants, pre-emptive threading support, and now asynchronous I/O including support for timers and sockets transparently using the fastest available demultiplexing technique available on your platform (i.e. epoll on Linux, kqueue on BSD and OSX, IO completion ports on Windows) The scripting harness works much like Python or Perl does: you just say flx my/program and it compiles the program to C++, compiles the C++ to a shared library, links in all required extra libraries automatically, and runs your program. In other words it works like a scripting language but performs like the best native code binaries. Although the compiler is fairly mature .. the build system and async support is brand new so you can expect some teething problems. If you're into the bleeding edge and want a high level of abstraction without sacrificing performance, this product is for you! -- John Skaller Async PL, Realtime software consultants Checkout Felix: http://felix.sourceforge.net ------------------------------------------------------- This SF.Net email is sponsored by xPML, a groundbreaking scripting language that extends applications into web and mobile media. Attend the live webcast and join the prime developer group breaking into this new coding territory! http://sel.as-us.falkag.net/sel?cmd=lnk&kid=110944&bid=241720&dat=121642 _______________________________________________ Felix-language mailing list Felix-language@REDACTED https://lists.sourceforge.net/lists/listinfo/felix-language From mats.cronqvist@REDACTED Tue Mar 7 12:14:23 2006 From: mats.cronqvist@REDACTED (Mats Cronqvist) Date: Tue, 07 Mar 2006 12:14:23 +0100 Subject: optimization of list comprehensions In-Reply-To: <200603070006.k27060ki296681@atlas.otago.ac.nz> References: <200603070006.k27060ki296681@atlas.otago.ac.nz> Message-ID: <440D6B0F.20106@ericsson.com> Richard A. 
O'Keefe wrote: [many interesting observations deleted] > It's up to the proponents of a new notation > to provide other samples showing that the notation _would_ pay off. And > in fairness, I must admit that if such a notation did exist, code might > be written differently to take more advantage of it. But that's just > speculation. indeed. in the olden days, you saw lots of code like this; goo(X) -> lists:reverse(goo(X,[])). goo([],A) -> A; goo([H|T],A) when is_integer(H) -> goo(T,[H|A]); goo([_|T],A) -> goo(T,A). which most sane people would now write like this; goo(X) -> [I || I <- X,is_integer(I)]. interestingly, one almost never saw this (or variations thereof); goo(X) -> lists:flatmap(fun(I) when is_integer(I)->[I];(_)->[] end, X). similarly, this; foo(Xs) -> foo(Xs,root). foo([],A) -> A; foo([X|T],A) when is_atom(X) -> foo(T,{A,X}); foo([_|T],A) -> foo(T,A). would look a lot better like this (using Per Gustafsson's syntax; http://www.erlang.org/ml-archive/erlang-questions/200603/msg00034.html) foo(Xs) -> ({A,X} || X<-Xs, A<--root, is_atom(X)). i reject the argument "there is noo need for new syntax since one can already do this"; foo(Xs) -> lists:foldl(fun(X,A) when is_atom(X) -> {A,X}; (_,A) -> A end, root,Xs). firstly; it is actually a lot more difficult to read (although the LOC is the same). secondly; for whatever reason normal industry programmers(*) will rarely if ever use lists:fold* (or indeed anything that involves funs). they didn't use lists:map or lists:foreach either, but they do use list comprehensions. * by definition, anyone who writes erlang for a living and doesn't read this. mats From mats.cronqvist@REDACTED Tue Mar 7 12:17:11 2006 From: mats.cronqvist@REDACTED (Mats Cronqvist) Date: Tue, 07 Mar 2006 12:17:11 +0100 Subject: optimization of list comprehensions In-Reply-To: <200603062130.k26LUOJl295644@atlas.otago.ac.nz> References: <200603062130.k26LUOJl295644@atlas.otago.ac.nz> Message-ID: <440D6BB7.9020202@ericsson.com> Richard A. O'Keefe wrote: > [deleted stuff] i can only say that i agree wholeheartedly with this. mats > Now we actually have three things we are discussing here: > > - list comprehensions (and why don't we have tuple comprehensions? > Clean has them, and I use them a lot when writing Clean) > > We have them in Erlang, because of Mnemonsyne. > > - list walking for side effect (not needed in Haskell because there > aren't any side effects) > > We can get this effect simply by sticking "_ =" in front of a > list comprehension (or any other variable name that is clearly > not intended to be used again) and having a very slightly smarter > compiler. I _hope_ it would not be very useful. > > - list folding (and again, tuple folding would be nice too) > Presumably one of the reasons that Haskell doesn't have this is > that Haskell has at least four different versions of fold > > This one certainly would be useful; the OTP sources could use some > kind of fold at least 400 times, and foldl at least 300 times. > > I think it would be advisable to check several hundred potential > uses of the notation before designing a concrete syntax. From jaiswal.vikash@REDACTED Tue Mar 7 13:26:50 2006 From: jaiswal.vikash@REDACTED (jaiswal.vikash@REDACTED) Date: Tue, 7 Mar 2006 17:56:50 +0530 Subject: node not getting created Message-ID: Hello , I'm facing a problem with epmd process . From one process when I'm spawning a second process , most of the time the second process is getting spawned . But in some occassions the Second process is not getting spawned at all . 
I am not able to establish any connection with the node of the second process . At this point I'm getting an error PORT2_RESP(error) for second process ( This I can see by running the epmd process in debug mode ) . Now if I kill the epmd process and reboot the system the problem gets solved . I would like to know why the error comes up and why does it come up only once in a while ? Also , is there any solution other than killing the epmd process and rebooting the system ? Thanks and regards , Vikash Kr. Jaiswal The information contained in this electronic message and any attachments to this message are intended for the exclusive use of the addressee(s) and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you should not disseminate, distribute or copy this e-mail. Please notify the sender immediately and destroy all copies of this message and any attachments. WARNING: Computer viruses can be transmitted via email. The recipient should check this email and any attachments for the presence of viruses. The company accepts no liability for any damage caused by any virus transmitted by this email. www.wipro.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From chandrashekhar.mullaparthi@REDACTED Tue Mar 7 14:20:11 2006 From: chandrashekhar.mullaparthi@REDACTED (chandru) Date: Tue, 7 Mar 2006 13:20:11 +0000 Subject: node not getting created In-Reply-To: References: Message-ID: Hi, On 07/03/06, jaiswal.vikash@REDACTED wrote: > > Hello , > > I'm facing a problem with epmd process . From one process when I'm > spawning a second process , most of the time the second process is getting > spawned . But in some occassions the Second process is not getting spawned > at all . I am not able to establish any connection with the node of the > second process . At this point I'm getting an error PORT2_RESP(error) for > second process ( This I can see by running the epmd process in debug mode ) > . Now if I kill the epmd process and reboot the system the problem gets > solved . > I would like to know why the error comes up and why does it come up only > once in a while ? Also , is there any solution other than killing the epmd > process and rebooting the system ? It isn't quite clear what your problem is. epmd is only concerned with nodes which are started with a name. It enables distribution by acting as a "name server". When you say, you are spawning a second process do you really mean an erlang process or are you spawning another erlang node? Try rephrasing your problem. cheers Chandru From klacke@REDACTED Tue Mar 7 14:51:49 2006 From: klacke@REDACTED (Claes Wikstrom) Date: Tue, 07 Mar 2006 14:51:49 +0100 Subject: updates to new rdbms In-Reply-To: References: Message-ID: <440D8FF5.8060706@hyber.org> Ulf Wiger (AL/EAB) wrote: > I've checked in a bug fix of rdbms_index, > ..... Sometimes one wish that ones MUA had a feature whereby a mail thread/topic could be marked as "never ever show me mail in this thread .. ever again" /klacke -- Claes Wikstrom -- Caps lock is nowhere and http://www.hyber.org -- everything is under control cellphone: +46 70 2097763 From ulf.wiger@REDACTED Tue Mar 7 15:02:36 2006 From: ulf.wiger@REDACTED (Ulf Wiger (AL/EAB)) Date: Tue, 7 Mar 2006 15:02:36 +0100 Subject: updates to new rdbms Message-ID: If that's a widely held opinion, I can of course take the discussion elsewhere. 
(Or perhaps you didn't mean just this thread, since there have certainly been longer threads that may not have had universal appeal?) Except there isn't any obvious "elsewhere", since the trapexit forums are down. Creating different mailing lists on the jungerl sourceforge project would be one option. I welcome any suggestions/comments. /Ulf W > -----Original Message----- > From: Claes Wikstrom [mailto:klacke@REDACTED] > Sent: den 7 mars 2006 14:52 > To: Ulf Wiger (AL/EAB) > Cc: erlang-questions@REDACTED > Subject: Re: updates to new rdbms > > Ulf Wiger (AL/EAB) wrote: > > I've checked in a bug fix of rdbms_index, > > ..... > > Sometimes one wish that ones MUA had a feature whereby a mail > thread/topic could be marked as > > "never ever show me mail in this thread .. ever again" > > > /klacke > > > -- > Claes Wikstrom -- Caps lock is nowhere and > http://www.hyber.org -- everything is under control > cellphone: +46 70 2097763 > From rpettit@REDACTED Tue Mar 7 15:26:45 2006 From: rpettit@REDACTED (Rick Pettit) Date: Tue, 7 Mar 2006 08:26:45 -0600 Subject: updates to new rdbms In-Reply-To: References: Message-ID: <20060307142645.GA29845@vailsys.com> On Tue, Mar 07, 2006 at 03:02:36PM +0100, Ulf Wiger (AL/EAB) wrote: > > If that's a widely held opinion, I > can of course take the discussion elsewhere. > (Or perhaps you didn't mean just this thread, > since there have certainly been longer threads > that may not have had universal appeal?) > > Except there isn't any obvious "elsewhere", > since the trapexit forums are down. > > Creating different mailing lists on the jungerl > sourceforge project would be one option. > > I welcome any suggestions/comments. If you stop mailing the list, then people with interest won't hear (er, read) what you have to say (er, write). On the other hand, if you continue with the thread and some people decide they are no longer interested, it should be trivial (at least possible) to filter on their end. I personally don't mind the thread, and in general tend to learn from emails by Ulf, Klacke, et al. -Rick > /Ulf W > > > -----Original Message----- > > From: Claes Wikstrom [mailto:klacke@REDACTED] > > Sent: den 7 mars 2006 14:52 > > To: Ulf Wiger (AL/EAB) > > Cc: erlang-questions@REDACTED > > Subject: Re: updates to new rdbms > > > > Ulf Wiger (AL/EAB) wrote: > > > I've checked in a bug fix of rdbms_index, > > > ..... > > > > Sometimes one wish that ones MUA had a feature whereby a mail > > thread/topic could be marked as > > > > "never ever show me mail in this thread .. ever again" > > > > > > /klacke > > > > > > -- > > Claes Wikstrom -- Caps lock is nowhere and > > http://www.hyber.org -- everything is under control > > cellphone: +46 70 2097763 > > From anders.nygren@REDACTED Tue Mar 7 15:40:41 2006 From: anders.nygren@REDACTED (anders) Date: Tue, 7 Mar 2006 08:40:41 -0600 Subject: proc added to jungerl In-Reply-To: References: Message-ID: <200603070840.42162.anders.nygren@telteq.com.mx> On Monday 06 March 2006 10:31, Ulf Wiger (AL/EAB) wrote: > I added an application called 'proc' > to jungerl. While it looks like an interesting application, I find it hard to use when it is released with these terms. "Copyright ?? Ericsson AB 2005 All rights reserved. The information in this document is the property of Ericsson. Except as specifically authorized in writing by Ericsson, the receiver of this document shall keep the information contained herein confidential and shall protect the same in whole or in part from disclosure and dissemination to third parties. 
Disclosure and disseminations to the receivers employees shall only be made on a strict need to know basis." I hope that was a mistake that will be fixed by changing the terms. /Anders Nygren From ulf.wiger@REDACTED Tue Mar 7 16:15:10 2006 From: ulf.wiger@REDACTED (Ulf Wiger (AL/EAB)) Date: Tue, 7 Mar 2006 16:15:10 +0100 Subject: proc added to jungerl Message-ID: A mistake. New versions checked in, now with EPL copyrights. /U > -----Original Message----- > From: owner-erlang-questions@REDACTED > [mailto:owner-erlang-questions@REDACTED] On Behalf Of anders > Sent: den 7 mars 2006 15:41 > To: erlang-questions@REDACTED > Subject: Re: proc added to jungerl > > On Monday 06 March 2006 10:31, Ulf Wiger (AL/EAB) wrote: > > I added an application called 'proc' > > to jungerl. > While it looks like an interesting application, I find it > hard to use when it is released with these terms. > > "Copyright ?? Ericsson AB 2005 All rights reserved. The > information in this document is the property of Ericsson. > Except as specifically authorized in writing by Ericsson, the > receiver of this document shall keep the information > contained herein confidential and shall protect the same in > whole or in part from disclosure and dissemination to third > parties. Disclosure and disseminations to the receivers > employees shall only be made on a strict need to know basis." > > I hope that was a mistake that will be fixed by changing the terms. > > /Anders Nygren > From thomasl_erlang@REDACTED Tue Mar 7 16:27:41 2006 From: thomasl_erlang@REDACTED (Thomas Lindgren) Date: Tue, 7 Mar 2006 07:27:41 -0800 (PST) Subject: smart exceptions In-Reply-To: <440CC8CE.7030901@hq.idt.net> Message-ID: <20060307152741.97690.qmail@web34409.mail.mud.yahoo.com> --- Serge Aleynikov wrote: > 1. Am I correct to say that when using smart > exceptions, the beam files > in jungerl/lib/smart_exceptions/ebin are only needed > at compile-time and > not at run-time? With the "normal" use of smart exceptions, you only need the transform at compile time. (Theoretically, you can also make it plug in other stuff to do at an error, but this hasn't really been used.) > 2. Is there a way to use smart_exceptions in order > to include > function/line info in the uncaught throw exception? I'm not convinced that's a good default behaviour, since the thrown value is sometimes used as a valid return value. Changing a thrown value may then lead to amusing runtime problems as pattern matches fail. If you still want to do it, hack the code so that throw/1 is rewritten like exit/1. Eh, jungerl seems to be very slow, so I can't show you directly ... but it should basically be a matter of copy+paste+modify a line where a call to exit(Rsn) is detected and rewriting the copied line to do the same with throw. Note that the code does this rewrite in two ways, depending on a compile-time flag. It's pretty straightforward, just be sure that you are getting your modified version. PS. OTP, not that I know if you're thinking about it, but if you want to introduce line numbers etc. in exceptions, please mail me before you start hacking. There are a couple of things to think about, and do, to get it right, or, at least, to get it much more right than smart_exceptions. Best, Thomas __________________________________________________ Do You Yahoo!? Tired of spam? Yahoo! 
Mail has the best spam protection around http://mail.yahoo.com From chsu79@REDACTED Tue Mar 7 16:33:07 2006 From: chsu79@REDACTED (Christian S) Date: Tue, 7 Mar 2006 16:33:07 +0100 Subject: updates to new rdbms In-Reply-To: References: Message-ID: 2006/3/7, Ulf Wiger (AL/EAB) : > > > If that's a widely held opinion, I > can of course take the discussion elsewhere. > (Or perhaps you didn't mean just this thread, > since there have certainly been longer threads > that may not have had universal appeal?) > > Except there isn't any obvious "elsewhere", > since the trapexit forums are down. > > Creating different mailing lists on the jungerl > sourceforge project would be one option. > > I welcome any suggestions/comments. > > blog about it and get it on planeterlang.org? -------------- next part -------------- An HTML attachment was scrubbed... URL: From thomasl_erlang@REDACTED Tue Mar 7 16:57:00 2006 From: thomasl_erlang@REDACTED (Thomas Lindgren) Date: Tue, 7 Mar 2006 07:57:00 -0800 (PST) Subject: optimization of list comprehensions In-Reply-To: Message-ID: <20060307155700.94860.qmail@web34404.mail.mud.yahoo.com> --- "Ulf Wiger (AL/EAB)" wrote: > Richard A. O'Keefe wrote: > > > In fact, I spent about 20 minutes looking at foldl > calls in > > the OTP sources, and *most* of them had complex > functions and > > list arguments that were just simple variable > names. > > Let me then provide a more interesting example. > Based on (and supposedly equivalent to) real-life > code, > but never compiled nor tested: > > mk_tops(TIds1, TIds2, Dir, SId) > when is_list(TIds1), is_list(TIds2) -> > [#ctxtTop{frTId = TId1, toTId = TId2, > dir = Dir, strId = SId} || > TId1 <- TIds1, > TId2 <- TIds2, > TId1 =/= TId2]. This operation is O(mn) for m TIds1 and n TIds2. As used below, O(n) or O(n^2). > expand(Tops, TIds) -> > lists:foldr( > fun(#ctxtTop{frTId = '*', toTId = '*', dir = > Dir, strId = SId}, Acc) > -> > Acc ++ mk_tops(TIds, TIds, Dir, Sid); > (#ctxtTop{frTId = T1, toTId = '*', dir = Dir, > strId = SId}, Acc) > -> > Acc ++ mk_tops([T1], TIds, Dir, SId); > (#ctxtTop{frTId = '*', toTId = T1, dir = Dir, > strId = SId}, Acc) > -> > Acc ++ mk_tops(TIds, [T1], Dir, SId) > end, [], Tops). ... > I have no own idea on how to improve it, with or > without > folding lcs. The outermost loop looks like it could be written with foldl instead to get rid of the appending. Something like this: lists:foldl( fun(..., Acc) -> mk_tops(From, To, Dir, SId, Acc) ; ... end, [], Tops) and mk_tops(TIds1, TIds2, Dir, SId, Acc) when is_list(TIds1), is_list(TIds2) -> [#ctxtTop{frTId = TId1, toTId = TId2, dir = Dir, strId = SId} || TId1 <- TIds1, TId2 <- TIds2, TId1 =/= TId2] ++ Acc. (mk_tops now has a ++, but I believe the beam compiler gets rid of this special case. If it doesn't, you can perhaps get around it with a final flatten.) The mk_tops function is O(n^2) in itself, for n TIds, so that might be the dominant cost; impossible to say without knowing more about the inputs. But since the worst-case output is a list of length roughly n^2, there is not a lot of slack; a better solution would have to change the data representation. For example, when '*' appears multiple times, many list elements will have the same frTId and/or toTId fields but different dir and strTId fields. 
Maybe this can be exploited to share the work, e.g., if you instead return {collection, FromTo_pairs, Dir, StrTId} then the FromTo_pairs can be shared by several "instances" that differ in Dir or StrTId: so, while traversing the list, check if (*,*) or (T,*) or whatever has already been computed and reuse/share the computed pairs if it has. Of course, such a solution seems complex enough that it might be worth waiting for trouble to turn up before fighting it :-) Best, Thomas __________________________________________________ Do You Yahoo!? Tired of spam? Yahoo! Mail has the best spam protection around http://mail.yahoo.com From serge@REDACTED Tue Mar 7 16:57:24 2006 From: serge@REDACTED (Serge Aleynikov) Date: Tue, 07 Mar 2006 10:57:24 -0500 Subject: smart exceptions In-Reply-To: <20060307152741.97690.qmail@web34409.mail.mud.yahoo.com> References: <20060307152741.97690.qmail@web34409.mail.mud.yahoo.com> Message-ID: <440DAD64.6020908@hq.idt.net> Thomas, thanks for your tips. Another question: In the attached example when an undefined function is called (test case 0) the line number doesn't get written as you can see below. Is this expected? Serge $ erlc +'{parse_transform, smart_exceptions}' -pa ../ebin -o .. test.erl $ erl Erlang (BEAM) emulator version 5.4.12 [source] [hipe] [threads:0] Eshell V5.4.12 (abort with ^G) 1> test:test(). {0, {'EXIT',{undef,[{abcd,f,[]}, {test,'-test/0-lc$^0/1-0-',1}, {erl_eval,do_apply,5}, {shell,exprs,6}, {shell,eval_loop,3}]}}} {1,{'EXIT',{{test,t,1},{line,12},{error,1}}}} {2, {'EXIT',{{test,t,1}, {line,16}, {badarith,[{test,t,1}, {test,'-test/0-lc$^0/1-0-',1}, {test,'-test/0-lc$^0/1-0-',1}, {erl_eval,do_apply,5}, {shell,exprs,6}, {shell,eval_loop,3}]}, '/', [100,0]}}} {3,{'EXIT',{{test,t,1},{line,20},match,[{error,abc}]}}} {4,{error,test}} {5, {'EXIT',{{test,t,1}, {line,26}, {bad_error,[{test,t,1}, {test,'-test/0-lc$^0/1-0-',1}, {test,'-test/0-lc$^0/1-0-',1}, {erl_eval,do_apply,5}, {shell,exprs,6}, {shell,eval_loop,3}]}, {erlang,error}, [bad_error]}}} {6, {'EXIT',{{{test,t,1},{line,29},bad_fault}, [{test,t,1}, {test,'-test/0-lc$^0/1-0-',1}, {test,'-test/0-lc$^0/1-0-',1}, {erl_eval,do_apply,5}, {shell,exprs,6}, {shell,eval_loop,3}]}}} {7, {'EXIT',{{test,t,1}, {line,32}, {badarg,[{test,t,1}, {test,'-test/0-lc$^0/1-0-',1}, {test,'-test/0-lc$^0/1-0-',1}, {erl_eval,do_apply,5}, {shell,exprs,6}, {shell,eval_loop,3}]}, '!', [pid,test]}}} {8,{'EXIT',{{test,t,1},{line,7},function_clause,"\b"}}} Thomas Lindgren wrote: > > --- Serge Aleynikov wrote: > > >>1. Am I correct to say that when using smart >>exceptions, the beam files >>in jungerl/lib/smart_exceptions/ebin are only needed >>at compile-time and >>not at run-time? > > > With the "normal" use of smart exceptions, you only > need the transform at compile time. > > (Theoretically, you can also make it plug in other > stuff to do at an error, but this hasn't really been > used.) > > >>2. Is there a way to use smart_exceptions in order >>to include >>function/line info in the uncaught throw exception? > > > I'm not convinced that's a good default behaviour, > since the thrown value is sometimes used as a valid > return value. Changing a thrown value may then lead to > amusing runtime problems as pattern matches fail. > > If you still want to do it, hack the code so that > throw/1 is rewritten like exit/1. Eh, jungerl seems to > be very slow, so I can't show you directly ... 
but it > should basically be a matter of copy+paste+modify a > line where a call to exit(Rsn) is detected and > rewriting the copied line to do the same with throw. > > Note that the code does this rewrite in two ways, > depending on a compile-time flag. It's pretty > straightforward, just be sure that you are getting > your modified version. > > PS. OTP, not that I know if you're thinking about it, > but if you want to introduce line numbers etc. in > exceptions, please mail me before you start hacking. > There are a couple of things to think about, and do, > to get it right, or, at least, to get it much more > right than smart_exceptions. > > Best, > Thomas > > > __________________________________________________ > Do You Yahoo!? > Tired of spam? Yahoo! Mail has the best spam protection around > http://mail.yahoo.com > -- Serge Aleynikov R&D Telecom, IDT Corp. Tel: (973) 438-3436 Fax: (973) 438-1464 serge@REDACTED -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: test.erl URL: From thomasl_erlang@REDACTED Tue Mar 7 17:09:43 2006 From: thomasl_erlang@REDACTED (Thomas Lindgren) Date: Tue, 7 Mar 2006 08:09:43 -0800 (PST) Subject: Erlang & Hyperthreading In-Reply-To: <78568af10603012222v87a1b30u1dfc05e4c4695663@mail.gmail.com> Message-ID: <20060307160943.13579.qmail@web34409.mail.mud.yahoo.com> --- Ryan Rawson wrote: > In my circumstance, I run a mnesia database on every > node. Each node > answers questions from its local database. So > running N nodes on a > N-CPU/SMP system ends up with N copies of the > database on 1 machine. This is one case where using multiple nodes easily gets complicated or expensive. > That isn't the end of the world, since practically > any Unix/Linux > application on a 32 machine can't use more than 1.5 > GB RAM, but the > issue I'd be worried about is the additional > communications overhead. OK, this again depends on your architecture. In some cases, a given job can be broken down into pieces that can be passed around the nodes with relative ease. In other cases, e.g., e-commerce or so I'm told, you might be able to spread/scale the data and database over many nodes. In yet other cases, well ... bummer, man. It might also be worth measuring the actual communications overheads for your specific case. While waiting for the multithreaded VM, hey? :-) Best, Thomas __________________________________________________ Do You Yahoo!? Tired of spam? Yahoo! Mail has the best spam protection around http://mail.yahoo.com From thomasl_erlang@REDACTED Tue Mar 7 17:27:31 2006 From: thomasl_erlang@REDACTED (Thomas Lindgren) Date: Tue, 7 Mar 2006 08:27:31 -0800 (PST) Subject: smart exceptions In-Reply-To: <440DAD64.6020908@hq.idt.net> Message-ID: <20060307162731.30374.qmail@web34402.mail.mud.yahoo.com> --- Serge Aleynikov wrote: > Thomas, thanks for your tips. > > Another question: In the attached example when an > undefined function is > called (test case 0) the line number doesn't get > written as you can see > below. Is this expected? Yes. There are some cases that can't be (cheaply) caught by smart_exceptions. At compile-time, we don't know if the function is supposed to be defined or not, and wrapping a catch around every function call to handle if it's undefined seems like overkill. You will see the same thing when calling bad funs, I believe. And calling code that wasn't compiler with smart exceptions will throw dumb old exceptions. All of these appear because this sort of checking seemed too expensive. 
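To make the cost argument concrete, here is a purely illustrative sketch (not part of smart_exceptions; safe_call/4 and the exit term format are invented for this example) of what wrapping a single call site would amount to:

safe_call(M, F, Args, Line) ->
    try
        apply(M, F, Args)
    catch
        %% Note: this also traps undefs raised further down the call
        %% chain, not just an undefined M:F at this particular call.
        error:undef ->
            exit({{?MODULE, Line}, {undef, {M, F, length(Args)}}})
    end.

Doing something like this for every remote call in a module is exactly the overhead being avoided.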
Oh yes, there is another case where you will get dumb exceptions: when a binary expression fails (e.g., A = {foo}, <>). This one is something that should be fixed, but I've put it off. Finally, there is a fundamental weakness: smart_exceptions do not handle expressions with exported variables well. This is a thorny issue, but if erlc reports that variables are mysteriously undefined, that may be the cause. (And as you can see, smart exceptions is really functionality that sits better integrated in the VM :-) If you have any usage/features feedback, send me a mail. Best, Thomas __________________________________________________ Do You Yahoo!? Tired of spam? Yahoo! Mail has the best spam protection around http://mail.yahoo.com From rickard.s.green@REDACTED Tue Mar 7 17:30:40 2006 From: rickard.s.green@REDACTED (Rickard Green) Date: Tue, 07 Mar 2006 17:30:40 +0100 Subject: Message passing benchmark on smp emulator Message-ID: A non-text attachment was scrubbed... Name: not available Type: multipart/mixed Size: 1 bytes Desc: not available URL: From rickard.s.green@REDACTED Tue Mar 7 17:52:26 2006 From: rickard.s.green@REDACTED (Rickard Green) Date: Tue, 07 Mar 2006 17:52:26 +0100 Subject: [Fwd: Message passing benchmark on smp emulator] Message-ID: <440DBA4A.60700@ericsson.com> Trying again... -------- Original Message -------- Subject: Message passing benchmark on smp emulator Date: Tue, 07 Mar 2006 17:30:40 +0100 From: Rickard Green Newsgroups: erix.mailing-list.erlang-questions The message passing benchmark used in estone (and bstone) isn't very well suited for the smp emulator since it sends a message in a ring (more or less only 1 process runnable all the time). In order to be able to take advantage of an smp emulator I wrote another message passing benchmark. In this benchmark all participating processes sends a message to all processes and waits for replies on the sent messages. I've attached the benchmark. Run like this: big:bang(NoOfParticipatingProcesses). I ran the benchmark on a machine with two hyperthreaded Xeon 2.40GHz processors. big:bang(50): * r10b completed after about 0.014 seconds. * p11b with 4 schedulers completed after about 0.018 seconds. big:bang(100): * r10b completed after about 0.088 seconds. * p11b with 4 schedulers completed after about 0.088 seconds. big:bang(300): * r10b completed after about 2.6 seconds. * p11b with 4 schedulers completed after about 1.0 seconds. big:bang(500): * r10b completed after about 10.7 seconds. * p11b with 4 schedulers completed after about 3.5 seconds. big:bang(600): * r10b completed after about 18.0 seconds. * p11b with 4 schedulers completed after about 5.8 seconds. big:bang(700): * r10b completed after about 27.0 seconds. * p11b with 4 schedulers completed after about 9.3 seconds. Quite a good result I guess. Note that this is a special case and these kind of speedups are not expected for an arbitrary Erlang program. If you want to try yourself download a P11B snapshot at: http://www.erlang.org/download/snapshots/ remember to enable smp support: ./configure --enable-smp-support --disable-lock-checking You can change the number of schedulers used by passing the +S command line argument to erl or by calling: erlang:system_flag(schedulers, NoOfSchedulers) -> {ok|PosixError, CurrentNo, OldNo} /Rickard Green, Erlang/OTP -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... 
Name: big.erl URL: From nils.muellner@REDACTED Tue Mar 7 18:02:25 2006 From: nils.muellner@REDACTED (=?ISO-8859-15?Q?Nils_M=FCllner?=) Date: Tue, 07 Mar 2006 18:02:25 +0100 Subject: crypto:aes problem Message-ID: <440DBCA1.9050103@heh.uni-oldenburg.de> hi, i wrote a little "benchmark" in erlang, using the crypto:aes_cbc. but it happens, that crypto:aes_cbc produces 2 different results with the same input and the benchmark fails. i wanted to compare the power of distributed computing against one single pc. even the power of multi-cpu-systems could been measured by starting multiple client-nodes. can somebody help please ;-) the code is available at http://www.informatik.uni-oldenburg.de/~phoenix/db3.erl for use please take the following steps: 1. ensure that your erlang dist supports aes and you have the .erlang.cookie set 2. edit the values "talona" in the code to the name of your machine, talona was mine... (should be 4 times) 3. start 3 consoles, go to the dir containing the db3.erl 4. run erl -sname server on the first, erl -sname keyserver on 2nd and erl -sname client on 3rd 5. run c(db3). on all of them 6. on server@REDACTED run db3:start_server(). on client@REDACTED run db3:start_client(). and on keyserver@REDACTED run db3:start_keyserver(). 7. finally, to start the whole mess type db3:start(). the used key is set to Key = <<16#00,16#00,16#00,16#00,16#00,16#00,16#00,16#00,16#00,16#00,16#00,16#00,16#00,16#00,16#0F,16#0F>>, so the server is supposed to stop after 16*256 steps. the output proofs that the function gets the right values (if you interrupt by pressing ctrl+c while the client tests the key), but crypto:aes is producing the wrong result. i have looked over the program the last 2 days and i cant find a mistake in my program. can anyone test this and tell me wether im wrong or there is a bug? nils p.s. this is not intended to break any cipher!! its just for benchmarking and comparing systems! i know it costs a lot of computing time to generate the whole output. but the output is just for debugging. the final version will just consist of a result output. From nils.muellner@REDACTED Tue Mar 7 18:25:55 2006 From: nils.muellner@REDACTED (=?ISO-8859-15?Q?Nils_M=FCllner?=) Date: Tue, 07 Mar 2006 18:25:55 +0100 Subject: crypto:aes problem In-Reply-To: <440DBCA1.9050103@heh.uni-oldenburg.de> References: <440DBCA1.9050103@heh.uni-oldenburg.de> Message-ID: <440DC223.4090200@heh.uni-oldenburg.de> there seems to be a problem with the server. correct url: http://www.informatik.uni-oldenburg.de/~phoenix/db3.zip nils > hi, > i wrote a little "benchmark" in erlang, using the crypto:aes_cbc. but > it happens, that crypto:aes_cbc produces 2 different results with the > same input and the benchmark fails. i wanted to compare the power of > distributed computing against one single pc. even the power of > multi-cpu-systems could been measured by starting multiple > client-nodes. can somebody help please ;-) > the code is available at > http://www.informatik.uni-oldenburg.de/~phoenix/db3.erl > > for use please take the following steps: > 1. ensure that your erlang dist supports aes and you have the > .erlang.cookie set > 2. edit the values "talona" in the code to the name of your machine, > talona was mine... (should be 4 times) > 3. start 3 consoles, go to the dir containing the db3.erl > 4. run > erl -sname server on the first, > erl -sname keyserver on 2nd and > erl -sname client on 3rd > 5. run c(db3). on all of them > 6. on server@REDACTED run > db3:start_server(). 
on client@REDACTED run > db3:start_client(). and on keyserver@REDACTED run > db3:start_keyserver(). > 7. finally, to start the whole mess type db3:start(). > > the used key is set to > Key = > <<16#00,16#00,16#00,16#00,16#00,16#00,16#00,16#00,16#00,16#00,16#00,16#00,16#00,16#00,16#0F,16#0F>>, > > so the server is supposed to stop after 16*256 steps. the output > proofs that the function gets the right values (if you interrupt by > pressing ctrl+c while the client tests the key), but crypto:aes is > producing the wrong result. i have looked over the program the last 2 > days and i cant find a mistake in my program. can anyone test this and > tell me wether im wrong or there is a bug? > > nils > > p.s. > this is not intended to break any cipher!! its just for benchmarking > and comparing systems! i know it costs a lot of computing time to > generate the whole output. but the output is just for debugging. the > final version will just consist of a result output. > From ulf.wiger@REDACTED Tue Mar 7 18:58:46 2006 From: ulf.wiger@REDACTED (Ulf Wiger (AL/EAB)) Date: Tue, 7 Mar 2006 18:58:46 +0100 Subject: updates to new rdbms Message-ID: Christian S wrote: > blog about it and get it on planeterlang.org ? That I could do, and it's great to see how many people do it. As a matter of fact, I have had a blog project on the backburner, but thought I'd work on it whenever I had some spare time (which hasn't happened in a loong time.) I invested a few hours in it now. Anyway, here it is: http://ulf.wiger.net/weblog/ There is a post on 'rdbms' there. You can register and post comments (and please do!). There's an RSS feed as well. I also put up copies of the code there, including a more accessible link to the manual (http://ulf.wiger.net/rdbms/doc/rdbms.html) /Ulf W -------------- next part -------------- An HTML attachment was scrubbed... URL: From ok@REDACTED Tue Mar 7 22:27:29 2006 From: ok@REDACTED (Richard A. O'Keefe) Date: Wed, 8 Mar 2006 10:27:29 +1300 (NZDT) Subject: optimization of list comprehensions Message-ID: <200603072127.k27LRT3b304006@atlas.otago.ac.nz> "Ulf Wiger \(AL/EAB\)" provided another example of uses of folding: expand(Tops, TIds) -> lists:foldr(<{hairy function}>, [], Tops). which is a foldr, not a foldl, and again, has a simple variable as the list. expand(Tops, TIds) -> lists:foldl(<{hairy function}>, [], Tops). mk_tops(TIds1, TIds2, Dir, SId, Acc) -> lists:foldl(fun(TId1, Acc1) -> lists:foldl(<{hairy function}>, Acc1, TIds2), Acc, TIds1). where again we find simple variables for the list arguments. Let's look at the example in a little more detail. mk_tops(TIds1, TIds2, Dir, SId) when is_list(TIds1), is_list(TIds2) -> [#ctxtTop{frTId = TId1, toTId = TId2, dir = Dir, strId = SId} || TId1 <- TIds1, TId2 <- TIds2, TId1 =/= TId2]. expand(Tops, TIds) -> lists:foldr( fun(#ctxtTop{frTId = '*', toTId = '*', dir = Dir, strId = SId}, Acc) -> Acc ++ mk_tops(TIds, TIds, Dir, Sid); (#ctxtTop{frTId = T1, toTId = '*', dir = Dir, strId = SId}, Acc) -> Acc ++ mk_tops([T1], TIds, Dir, SId); (#ctxtTop{frTId = '*', toTId = T1, dir = Dir, strId = SId}, Acc) -> Acc ++ mk_tops(TIds, [T1], Dir, SId) end, [], Tops). There is a common pattern here which we may be able to factor out: Acc ++ mk_tops(?1, ?2, Dir, SId) expand(Tops, TIds) -> lists:foldr(fun(#ctxtTop{frTID=F, toTId=T, dir=Dir, strID=SId}, Acc) -> {TIds1,TIds2} = case {F,T} of {'*','*'} -> {TIds,TIds} ; {_, '*'} -> {[F], TIds} ; {'*',_ } -> {TIds,[T] } end, Acc ++ mk_tops(TIds1, TIds2, Dir, SId) end, [], Tops). 
Now we can consider expanding mk_tops in-line. We see that the is_list(TIDs1), is_list(TIds2) guard is satisfied when is_list(TIds), so we can push that back into expand. expand(Tops, TIds) when is_list(TIds) -> lists:foldr(fun(#ctxtTop{frTID=F, toTId=T, dir=Dir, strId=SId}, Acc) -> {TIds1,TIds2} = case {F,T} of {'*','*'} -> {TIds,TIds} ; {_, '*'} -> {[F], TIds} ; {'*',_ } -> {TIds,[T] } end, Acc ++ [#ctxtTop{frTid=TId1, toTId=TId2, dir=Dir, strId=SId} || TId1 <- TIds1, TId2 < TId2, TId2 =/= TId1] end, [], Tops). Let's suppose the order of the elements doesn't matter very much, then we can replace this by a single list comprehension: expand(Tops, TIds) when is_list(TIds) -> [ #ctxtTop{frID=TId1, toTId=TId2, dir=Dir, strID=SId} || #ctxtTop{frId=F, toTId=T, dir=Dir, strId=SId} <- Tops, {TIds1,TIds2} = case {F, T } of {'*','*'} -> {TIds,TIds} ; {_, '*'} -> {[F], TIds} ; {'*',_ } -> {TIds,[T] } end, TId1 <- TIds1, TId2 <- TIds2, TId2 =/= Tid1]. If the order does matter, we might need an extra reverse. But we don't appear to need any folds, unless I have misunderstood the example. From ok@REDACTED Tue Mar 7 22:41:43 2006 From: ok@REDACTED (Richard A. O'Keefe) Date: Wed, 8 Mar 2006 10:41:43 +1300 (NZDT) Subject: optimization of list comprehensions Message-ID: <200603072141.k27Lfh8V304413@atlas.otago.ac.nz> Mats Cronqvist wrote: foo(Xs) -> foo(Xs, root). foo([], A) -> A; foo([X|T], A) when is_atom(X) -> foo(T, {A,X}); foo([_|T], A) -> foo(T, A). would look a lot better like this (using Per Gustafsson's syntax; http://www.erlang.org/ml-archive/erlang-questions/200603/msg00034.html) foo(Xs) -> ({A,X} || X<-Xs, A<--root, is_atom(X)). Er, weren't round parentheses the proposed syntax for "evaluate these expressions like a list comprehension but throw the results away and just return 'ok'"? If the normal order of binding in Erlang list comprehensions were followed in this syntax, A would be rebound to 'root' every time. <-- and <- are MUCH too similar; I defy anyone in the heat of battle (manager has locked you into the building to finish code which MUST ship tomorrow morning, yes that really happened to me) to see the difference. I am also extremely unconvinced by this example since it is a very very strange thing to want to compute. If such a weird data structure -type weird(X) -> root | {weird(X), X}. were really needed, we'd *already* have functions list_to_weird([]) -> root. list_to_weird([X|Xs]) -> {list_to_weird(Xs), X}. weird_to_list(root) -> []. weird_to_list({Xs,X}) -> [X|weird_to_list(Xs)]. and the example would *really* be foo(List) -> list_to_weird([Atom || Atom <- list, is_atom(Atom)]). i reject the argument "there is noo need for new syntax since one can already do this"; foo(Xs) -> lists:foldl(fun(X,A) when is_atom(X) -> {A,X}; (_,A) -> A end, root,Xs). firstly; it is actually a lot more difficult to read (although the LOC is the same). But that is NOT the way you would do it using existing syntax. You would do it like this: foo(List) -> list_to_weird([Atom || Atom <- list, is_atom(Atom)]). and that's the clearest of all. secondly; for whatever reason normal industry programmers(*) will rarely if ever use lists:fold* (or indeed anything that involves funs). they didn't use lists:map or lists:foreach either, but they do use list comprehensions. If you have evidence to back this up, it's publishable. Please publish it. Some sort of survey analysing what kind of programmers use what kind of language features would be most illuminating. 
But if I understand you, you are claiming that normal industry programmers are incompetent, ineducable, or both. From ulf.wiger@REDACTED Tue Mar 7 22:49:47 2006 From: ulf.wiger@REDACTED (Ulf Wiger (AL/EAB)) Date: Tue, 7 Mar 2006 22:49:47 +0100 Subject: optimization of list comprehensions Message-ID: Richard O'Keefe wrote: > expand(Tops, TIds) when is_list(TIds) -> > #ctxtTop{frID=TId1, toTId=TId2, dir=Dir, strID=SId} > || #ctxtTop{frId=F, toTId=T, dir=Dir, strId=SId} <- Tops, > {TIds1,TIds2} = case {F, T } > of {'*','*'} -> {TIds,TIds} > ; {_, '*'} -> {[F], TIds} > ; {'*',_ } -> {TIds,[T] } > end, > TId1 <- TIds1, > TId2 <- TIds2, > TId2 =/= Tid1]. >If the order does matter, we might need an extra > reverse. But we don't appear to need any folds, > unless I have misunderstood the example. I'm sure you're right. The function was a suggestion for a re-write of something that was _considerably_ longer (about 50 LOC, using neither folds, maps nor lcs, but with lots of inner-loop appends). I had presented three different pieces of code and simplified them in steps, to illustrate how one can chip away at hairy code and gradually see the patterns appear. Obviously, I quit iterating before reaching fixpoint. BR, /Ulf W From ulf.wiger@REDACTED Tue Mar 7 23:02:18 2006 From: ulf.wiger@REDACTED (Ulf Wiger (AL/EAB)) Date: Tue, 7 Mar 2006 23:02:18 +0100 Subject: optimization of list comprehensions Message-ID: Richard O'Keefe wrote: > secondly; for whatever reason normal > industry programmers(*) > will rarely if ever use lists:fold* (or > indeed anything that involves funs). > they didn't use lists:map or lists:foreach > either, but they do use list comprehensions. > > If you have evidence to back this up, it's publishable. > Please publish it. Some sort of survey analysing what kind > of programmers use what kind of language features would be > most illuminating. Hmm... I just posted the following on comp.lang.functional: ====================================== In our production code, I just did a search on calls to the 'lists' module (a collection of polymorphic iterator functions and other generic operations on lists) and the pattern 'fun(', signifying the definition of a higher-order function in Erlang. I focused on one subsystem - one that has fairly recently written code. In 154K lines of code, a simple 'grep' revealed 937 calls (6%) to 'lists:...' and 452 (3%) declarations of higher- order functions(*). Perhaps even more interesting, there were 198 instances (1.3%) of list comprehensions (only 200 generators, though, so nearly all lcs are simple). (*) Granted, many calls to the lists module will invclude a fun() (e.g. foldl, map, foreach, ...) I think it's safe to say that even "average" industrial programmers rather quickly learn to exploit the virtues of higher-order functions and iterators. ====================================== BR, Ulf W From ok@REDACTED Tue Mar 7 23:17:55 2006 From: ok@REDACTED (Richard A. O'Keefe) Date: Wed, 8 Mar 2006 11:17:55 +1300 (NZDT) Subject: optimization of list comprehensions Message-ID: <200603072217.k27MHtgR305561@atlas.otago.ac.nz> "Ulf Wiger \(AL/EAB\)" wrote: ... and the pattern 'fun(', signifying the definition of a ... This may well yield an underestimate. There's a lot of Erlang code that puts spaces between 'fun' and '(', sometimes even a newline! I think it's safe to say that even "average" industrial programmers rather quickly learn to exploit the virtues of higher-order functions and iterators. So yes, that seems to be a safe conclusion. 
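One way to tighten such an estimate without worrying about whitespace is to count tokens instead of grepping text. A rough, untested sketch (module and function names invented):

-module(count_funs).
-export([in_files/1]).

%% Count occurrences of the 'fun' keyword by scanning tokens, so
%% "fun (" with spaces or even a newline is still counted. Note that
%% this also counts "fun Name/Arity" references, not only full
%% fun-expressions, so it remains an estimate.
in_files(Files) ->
    lists:sum([in_file(F) || F <- Files]).

in_file(File) ->
    {ok, Bin} = file:read_file(File),
    {ok, Tokens, _End} = erl_scan:string(binary_to_list(Bin)),
    length([1 || {'fun', _} <- Tokens]).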
From fritchie@REDACTED Wed Mar 8 00:35:32 2006 From: fritchie@REDACTED (Scott Lystig Fritchie) Date: Tue, 07 Mar 2006 17:35:32 -0600 Subject: Erlang & Hyperthreading In-Reply-To: Message of "Tue, 07 Mar 2006 08:09:43 PST." <20060307160943.13579.qmail@web34409.mail.mud.yahoo.com> Message-ID: <200603072335.k27NZWgJ004731@snookles.snookles.com> >>>>> "tl" == Thomas Lindgren writes: tl> --- Ryan Rawson wrote: >> In my circumstance, I run a mnesia database on every node. Each >> node answers questions from its local database. So running N nodes >> on a N-CPU/SMP system ends up with N copies of the database on 1 >> machine. tl> This is one case where using multiple nodes easily gets tl> complicated or expensive. Thomas hinted at this ... depending on the nature of your application, you could break the Mnesia nodes into two groups, A & B: A nodes which have disc_copies and/or disc_only_copies of the app's important tables and do not run application code B nodes which have ram_copies of the Mnesia schema only and *do* run application code A + B = N Then you have only A copies of the database. If your application is fairly CPU-intensive, then you may be able to get away with A = 1, since the A node(s) "only" has to act like a "database server".(*) I've done this sort of thing where the A & B nodes are on physically separate machines. It worked pretty well for my (still under development) application. YMMV. -Scott (*) If Mnesia could be called a "client/server database" in this configuration. The terms "data-ful" and "data-less" are more accurate, I suppose, but I don't see those terms used to describe databases. {shrug} From ryanobjc@REDACTED Wed Mar 8 08:23:18 2006 From: ryanobjc@REDACTED (Ryan Rawson) Date: Tue, 7 Mar 2006 23:23:18 -0800 Subject: httpd module vs inets {packet,http} Message-ID: <78568af10603072323m5e009bd6x4f1107f0a37e5340@mail.gmail.com> Hi all, I read the 'fast httpd' howto from trapexit.org, and I also looked at the httpd module in OTP. I'm a little confused - it seems to me that the httpd howto doesn't use the httpd module, it uses an undocumented feature of the packet driver (which may in turn internally use the httpd module). The httpd documentation does describe callbacks, but it's kind of thinly documented. Not the end of the world, but I'm confused - what is the recommended thing to do here? What do other people do? Say, for example, creating a REST "web service"? Thanks in advance for any tips and hints. -ryan From matthias@REDACTED Wed Mar 8 09:06:12 2006 From: matthias@REDACTED (Matthias Lang) Date: Wed, 8 Mar 2006 09:06:12 +0100 Subject: httpd module vs inets {packet,http} In-Reply-To: <78568af10603072323m5e009bd6x4f1107f0a37e5340@mail.gmail.com> References: <78568af10603072323m5e009bd6x4f1107f0a37e5340@mail.gmail.com> Message-ID: <17422.36980.296819.754843@antilipe.corelatus.se> A "REST" web service seems to be some sort of philosophy for how to design a service. For the purpose of "how do I do this in Erlang", I think it just boils down to "how do I serve dynamically generated web pages". If it's not, then my answer probably misses the point. So: if you just want to serve dynamic web pages, you can choose between two ready-made web servers: OTP's httpd and YAWS. Both web servers are used in the real world. They have different performance tradeoffs and different approaches to interfacing with 'your' application. YAWS seems to be more popular for new applications. If you can't make up your mind about which one to use, flip a coin.
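If the coin lands on Yaws, a minimal "appmod" sketch looks roughly like this (hello_appmod is an invented module name and would have to be registered under appmods in yaws.conf; {html, IoList} is one of several documented return values of out/1):

-module(hello_appmod).
-export([out/1]).

%% Yaws calls out/1 with an #arg{} record for each request routed to
%% this appmod; returning {html, ...} sends an HTML body back.
out(_Arg) ->
    {html, "<p>Hello from Erlang</p>"}.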
The OTP httpd interface you probably want to use is 'mod_esi': http://www.erlang.org/doc/doc-5.4.12/lib/inets-4.6.2/doc/html/mod_esi.html Writing code to use it is straightforward, the hard part is all the fudging around with httpd.conf. If, on the other hand, you want to write your own web server 'from scratch', then the undocumented http mode of the packet driver is useful. That's what the 'howto' you found is about. Matthias -------------------- Ryan Rawson writes: > Hi all, > > I read the 'fast httpd' howto from trapexit.org, and I also looked at > the httpd module in OTS. I'm a little confused - it seems to me that > the httpd howto doesn't use the httpd module, it uses a undocumented > feature of the packet driver (which may in turn internally use the > httpd module). While the httpd documentation seems to describe > callbacks but its kind of thinly documented. Not the end of the > world, but I'm confused - what is the recommended thing to do here? > What do other people do? Say for example, creating a REST "web > service" ? > > Thanks in advance for any tips and hints. > -ryan From ryanobjc@REDACTED Wed Mar 8 09:11:38 2006 From: ryanobjc@REDACTED (Ryan Rawson) Date: Wed, 8 Mar 2006 00:11:38 -0800 Subject: httpd module vs inets {packet,http} In-Reply-To: <17422.36980.296819.754843@antilipe.corelatus.se> References: <78568af10603072323m5e009bd6x4f1107f0a37e5340@mail.gmail.com> <17422.36980.296819.754843@antilipe.corelatus.se> Message-ID: <78568af10603080011u1ee06e1fkcbbbc74ffc213cff@mail.gmail.com> I didn't like the howto - it seemed like my code would be littered with http protocol droppings, even though the actual framing is taken care of by the http packet mode. I think for me, yaws seems like this whole big thing, and kind of bothers me - enough to look at alternatives first. Thanks for the mod_esi pointer. -ryan On 3/8/06, Matthias Lang wrote: > > A "REST" web service seems to be some sort philosophy for how to > design a service. For the purpose of "how do I do this in Erlang", I > think it just boils down to "how do I serve dynamically generated web > pages". If it's not, then my answer probably misses the point. > > So: if you just want to serve dynamic web pages, you can choose > between two ready-made web servers: OTP's httpd and YAWS. Both web > servers are used in the real world. They have different peformance > tradeoffs and different approaches to interfacing with 'your' > application. YAWS seems to be more popular for new applications. If > you can't make up your mind about which one to use, flip a coin. > > The OTP httpd interface you probably want to use is 'mod_esi': > > http://www.erlang.org/doc/doc-5.4.12/lib/inets-4.6.2/doc/html/mod_esi.html > > Writing code to use it is straightforward, the hard part is all the > fudging around with httpd.conf. > > If, on the other hand, you want to write your own web server 'from > scratch', then the undocumented http mode of the packet driver is > useful. That's what the 'howto' you found is about. > > Matthias > > -------------------- > > Ryan Rawson writes: > > Hi all, > > > > I read the 'fast httpd' howto from trapexit.org, and I also looked at > > the httpd module in OTS. I'm a little confused - it seems to me that > > the httpd howto doesn't use the httpd module, it uses a undocumented > > feature of the packet driver (which may in turn internally use the > > httpd module). While the httpd documentation seems to describe > > callbacks but its kind of thinly documented. 
Not the end of the > > world, but I'm confused - what is the recommended thing to do here? > > What do other people do? Say for example, creating a REST "web > > service" ? > > > > Thanks in advance for any tips and hints. > > -ryan > From tobbe@REDACTED Wed Mar 8 10:05:26 2006 From: tobbe@REDACTED (Torbjorn Tornkvist) Date: Wed, 08 Mar 2006 10:05:26 +0100 Subject: httpd module vs inets {packet,http} In-Reply-To: <78568af10603080011u1ee06e1fkcbbbc74ffc213cff@mail.gmail.com> References: <78568af10603072323m5e009bd6x4f1107f0a37e5340@mail.gmail.com> <17422.36980.296819.754843@antilipe.corelatus.se> <78568af10603080011u1ee06e1fkcbbbc74ffc213cff@mail.gmail.com> Message-ID: Ryan Rawson wrote: > I didn't like the howto - it seemed like my code would be littered > with http protocol droppings, even though the actual framing is taken > care of by the http packet mode. > > I think for me, yaws seems like this whole big thing, and kind of > bothers me - enough to look at alternatives first. Don't hesitate! Yaws is dead-easy to use. See also: http://www.trapexit.org/docs/howto/howto_setup_yaws.html For a HowTo on how to quickly get running. Cheers, Tobbe From sean.hinde@REDACTED Wed Mar 8 10:38:36 2006 From: sean.hinde@REDACTED (Sean Hinde) Date: Wed, 8 Mar 2006 09:38:36 +0000 Subject: httpd module vs inets {packet,http} In-Reply-To: <78568af10603080011u1ee06e1fkcbbbc74ffc213cff@mail.gmail.com> References: <78568af10603072323m5e009bd6x4f1107f0a37e5340@mail.gmail.com> <17422.36980.296819.754843@antilipe.corelatus.se> <78568af10603080011u1ee06e1fkcbbbc74ffc213cff@mail.gmail.com> Message-ID: <7A9B007F-BAB5-4F8C-8ED3-EF8D716D1F53@gmail.com> That is interesting feedback on the tutorial, thanks. If I can find time to wrestle again with the sometimes flaky update mechanism on the Trapexit HOWTO pages I will update the tutorial to point out more clearly where the API to user code appears, as well as checkin a couple of bugfixes from folks who have studied it deeply and posted on the list. FWIW I would recommend Yaws most highly for new applications. It is not nearly as heavyweight as it first appears, and is very nice for writing this sort of application. Sean On 8 Mar 2006, at 08:11, Ryan Rawson wrote: > I didn't like the howto - it seemed like my code would be littered > with http protocol droppings, even though the actual framing is taken > care of by the http packet mode. > > I think for me, yaws seems like this whole big thing, and kind of > bothers me - enough to look at alternatives first. > > Thanks for the mod_esi pointer. > > -ryan > > On 3/8/06, Matthias Lang wrote: >> >> A "REST" web service seems to be some sort philosophy for how to >> design a service. For the purpose of "how do I do this in Erlang", I >> think it just boils down to "how do I serve dynamically generated web >> pages". If it's not, then my answer probably misses the point. >> >> So: if you just want to serve dynamic web pages, you can choose >> between two ready-made web servers: OTP's httpd and YAWS. Both web >> servers are used in the real world. They have different peformance >> tradeoffs and different approaches to interfacing with 'your' >> application. YAWS seems to be more popular for new applications. If >> you can't make up your mind about which one to use, flip a coin. 
>> >> The OTP httpd interface you probably want to use is 'mod_esi': >> >> http://www.erlang.org/doc/doc-5.4.12/lib/inets-4.6.2/doc/html/ >> mod_esi.html >> >> Writing code to use it is straightforward, the hard part is all the >> fudging around with httpd.conf. >> >> If, on the other hand, you want to write your own web server 'from >> scratch', then the undocumented http mode of the packet driver is >> useful. That's what the 'howto' you found is about. >> >> Matthias >> >> -------------------- >> >> Ryan Rawson writes: >>> Hi all, >>> >>> I read the 'fast httpd' howto from trapexit.org, and I also >>> looked at >>> the httpd module in OTS. I'm a little confused - it seems to me >>> that >>> the httpd howto doesn't use the httpd module, it uses a undocumented >>> feature of the packet driver (which may in turn internally use the >>> httpd module). While the httpd documentation seems to describe >>> callbacks but its kind of thinly documented. Not the end of the >>> world, but I'm confused - what is the recommended thing to do here? >>> What do other people do? Say for example, creating a REST "web >>> service" ? >>> >>> Thanks in advance for any tips and hints. >>> -ryan >> From mats.cronqvist@REDACTED Wed Mar 8 11:07:00 2006 From: mats.cronqvist@REDACTED (Mats Cronqvist) Date: Wed, 08 Mar 2006 11:07:00 +0100 Subject: optimization of list comprehensions In-Reply-To: <200603072141.k27Lfh8V304413@atlas.otago.ac.nz> References: <200603072141.k27Lfh8V304413@atlas.otago.ac.nz> Message-ID: <440EACC4.9040006@ericsson.com> Richard A. O'Keefe wrote: > Mats Cronqvist wrote: > foo(Xs) -> foo(Xs, root). > > foo([], A) -> A; > foo([X|T], A) when is_atom(X) -> foo(T, {A,X}); > foo([_|T], A) -> foo(T, A). > > would look a lot better like this (using Per Gustafsson's syntax; > http://www.erlang.org/ml-archive/erlang-questions/200603/msg00034.html) > > foo(Xs) -> > ({A,X} || X<-Xs, A<--root, is_atom(X)). > > Er, weren't round parentheses the proposed syntax for > "evaluate these expressions like a list comprehension but throw > the results away and just return 'ok'"? i see you didn't bother clicking on that link. [deleted critique of the example syntax] i think it has been made clear several times that i have no strong opinion about what the notation should look like. i'll let the pro's deal with that. > I am also extremely unconvinced by this example since it is a very very > strange thing to want to compute. [deleted critique of silly example] the example was just that; an example, and kept deliberately simple at that. the point of the example was that fold-like comprehensions would change the way people write code. the new notation would be used instead of recursive functions with guards and accumulators. just like list comprehensions are now used much more frequently than one would have estimated by looking at the (pre-lc) use of lists:map and lists:foreach. > secondly; for whatever reason normal industry programmers(*) > will rarely if ever use lists:fold* (or indeed anything that involves > funs). they didn't use lists:map or lists:foreach either, but they do > use list comprehensions. > > If you have evidence to back this up, it's publishable. > Please publish it. Some sort of survey analysing what kind of > programmers use what kind of language features would be most illuminating. alas, i am not a researcher (any longer). but maybe i can find the time to grep around a bit in the sources. 
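For reference, the foo/1 example quoted earlier in this message can already be written with lists:foldl as the language stands (note the argument order: fun, initial accumulator, list). A sketch only, to make the accumulator pattern concrete:

    foo(Xs) ->
        lists:foldl(fun(X, A) when is_atom(X) -> {A, X};
                       (_, A) -> A
                    end,
                    root,
                    Xs).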
> But if I understand you, you are claiming that normal industry programmers > are incompetent, ineducable, or both. no, you do most certainly not understand me. and it actually seems you're making a concious effort to misunderstand. just for the record; i don't think a reluctance to use funs indicates incompetence. i am aware that my proposed syntax is not optimal, and possibly even "quite astonishingly BAD". i remain convinced that fold-like comprehensions would be an excellent thing, just like map-like comprehensions were. mats From ryanobjc@REDACTED Wed Mar 8 11:24:47 2006 From: ryanobjc@REDACTED (Ryan Rawson) Date: Wed, 8 Mar 2006 02:24:47 -0800 Subject: httpd module vs inets {packet,http} In-Reply-To: <7A9B007F-BAB5-4F8C-8ED3-EF8D716D1F53@gmail.com> References: <78568af10603072323m5e009bd6x4f1107f0a37e5340@mail.gmail.com> <17422.36980.296819.754843@antilipe.corelatus.se> <78568af10603080011u1ee06e1fkcbbbc74ffc213cff@mail.gmail.com> <7A9B007F-BAB5-4F8C-8ED3-EF8D716D1F53@gmail.com> Message-ID: <78568af10603080224g29f61eccwd0c7a2b40b762d55@mail.gmail.com> Would you mind elaborating a bit for me? Basically my application is going to automatically discover its fellow node hosts via LDAP (we have a mechanism already), use mnesia to maintain state across them, and each machine is going to answer HTTP service queries. That is my current design - one major advantage is mnesia is a great cross-node database, great for high read/low write services that need to be able to answer questions quickly but don't do nearly as many writes. This particular feature of my app made me think Erlang would be well suited. -ryan On 3/8/06, Sean Hinde wrote: > That is interesting feedback on the tutorial, thanks. > > If I can find time to wrestle again with the sometimes flaky update > mechanism on the Trapexit HOWTO pages I will update the tutorial to > point out more clearly where the API to user code appears, as well as > checkin a couple of bugfixes from folks who have studied it deeply > and posted on the list. > > FWIW I would recommend Yaws most highly for new applications. It is > not nearly as heavyweight as it first appears, and is very nice for > writing this sort of application. > > Sean > > On 8 Mar 2006, at 08:11, Ryan Rawson wrote: > > > I didn't like the howto - it seemed like my code would be littered > > with http protocol droppings, even though the actual framing is taken > > care of by the http packet mode. > > > > I think for me, yaws seems like this whole big thing, and kind of > > bothers me - enough to look at alternatives first. > > > > Thanks for the mod_esi pointer. > > > > -ryan > > > > On 3/8/06, Matthias Lang wrote: > >> > >> A "REST" web service seems to be some sort philosophy for how to > >> design a service. For the purpose of "how do I do this in Erlang", I > >> think it just boils down to "how do I serve dynamically generated web > >> pages". If it's not, then my answer probably misses the point. > >> > >> So: if you just want to serve dynamic web pages, you can choose > >> between two ready-made web servers: OTP's httpd and YAWS. Both web > >> servers are used in the real world. They have different peformance > >> tradeoffs and different approaches to interfacing with 'your' > >> application. YAWS seems to be more popular for new applications. If > >> you can't make up your mind about which one to use, flip a coin. 
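On the high-read/low-write mnesia usage described above: for read-mostly tables it is common to answer queries with dirty reads and keep transactions for the comparatively rare writes. A sketch only, with a made-up table name:

    lookup(Key) ->
        %% dirty_read bypasses the transaction machinery, which is usually
        %% acceptable for read-mostly data.
        case mnesia:dirty_read(service_record, Key) of
            [Rec] -> {ok, Rec};
            []    -> not_found
        end.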
> >> > >> The OTP httpd interface you probably want to use is 'mod_esi': > >> > >> http://www.erlang.org/doc/doc-5.4.12/lib/inets-4.6.2/doc/html/ > >> mod_esi.html > >> > >> Writing code to use it is straightforward, the hard part is all the > >> fudging around with httpd.conf. > >> > >> If, on the other hand, you want to write your own web server 'from > >> scratch', then the undocumented http mode of the packet driver is > >> useful. That's what the 'howto' you found is about. > >> > >> Matthias > >> > >> -------------------- > >> > >> Ryan Rawson writes: > >>> Hi all, > >>> > >>> I read the 'fast httpd' howto from trapexit.org, and I also > >>> looked at > >>> the httpd module in OTS. I'm a little confused - it seems to me > >>> that > >>> the httpd howto doesn't use the httpd module, it uses a undocumented > >>> feature of the packet driver (which may in turn internally use the > >>> httpd module). While the httpd documentation seems to describe > >>> callbacks but its kind of thinly documented. Not the end of the > >>> world, but I'm confused - what is the recommended thing to do here? > >>> What do other people do? Say for example, creating a REST "web > >>> service" ? > >>> > >>> Thanks in advance for any tips and hints. > >>> -ryan > >> > > From bengt.kleberg@REDACTED Wed Mar 8 12:07:16 2006 From: bengt.kleberg@REDACTED (Bengt Kleberg) Date: Wed, 08 Mar 2006 12:07:16 +0100 Subject: looking for =?ISO-8859-1?Q?H=E5kan_Stenholm?= Message-ID: <440EBAE4.3060405@ericsson.com> greetings, the old address (mbox304.swipnet.se) is no longer woking. does anybody (esp H?kan :-) know of a new one? bengt From bjorn@REDACTED Wed Mar 8 13:11:41 2006 From: bjorn@REDACTED (Bjorn Gustavsson) Date: 08 Mar 2006 13:11:41 +0100 Subject: Erlang/OTP R10B-10 has been released Message-ID: Bug fix release : otp_src_R10B-10 Build date : 2006-03-08 This is bug fix release 10 for the R10B release. You can download the full source distribution from http://www.erlang.org/download/otp_src_R10B-10.tar.gz http://www.erlang.org/download/otp_src_R10B-10.readme Note: To unpack the TAR archive you need a GNU TAR compatible program. For instance, on MacOS X before 10.3 you must use the 'gnutar' command; you can't use the 'tar' command or StuffIt to unpack the sources. For installation instructions please read the README that is part of the distribution. The Windows binary distribution can be downloaded from http://www.erlang.org/download/otp_win32_R10B-10.exe On-line documentation can be found at http://www.erlang.org/doc.html. You can also download the complete HTML documentation or the Unix manual files http://www.erlang.org/download/otp_doc_html_R10B-10.tar.gz http://www.erlang.org/download/otp_doc_man_R10B-10.tar.gz We also want to thank those that sent us patches, suggestions and bug reports, The OTP Team -- Bj?rn Gustavsson, Erlang/OTP, Ericsson AB From ke.han@REDACTED Wed Mar 8 14:55:35 2006 From: ke.han@REDACTED (Jon Hancock) Date: Wed, 8 Mar 2006 21:55:35 +0800 Subject: httpd module vs inets {packet,http} In-Reply-To: <78568af10603080224g29f61eccwd0c7a2b40b762d55@mail.gmail.com> References: <78568af10603072323m5e009bd6x4f1107f0a37e5340@mail.gmail.com> <17422.36980.296819.754843@antilipe.corelatus.se> <78568af10603080011u1ee06e1fkcbbbc74ffc213cff@mail.gmail.com> <7A9B007F-BAB5-4F8C-8ED3-EF8D716D1F53@gmail.com> <78568af10603080224g29f61eccwd0c7a2b40b762d55@mail.gmail.com> Message-ID: My understanding is that REST is all about returning XML documents. 
So a gross overview of the REST transaction life-cycle is: 1 - fetch incoming requests (I think REST requests come wrapped in an HTTP format?) 2 - decode the request's URI and parameters 3 - statically or dynamically construct the resource to return 4 - return the resource as an XML doc Yaws will be a champ at this sort of thing. The HowTo you reference is enlightening and well written. The HowTo shows how to write your own socket server. You would only want to do this for something very specialized. For example, if you wanted to build a new server to handle some custom protocol you've written you would start with this HowTo structure and build from there. I am building an app that has just this requirement and I found the HowTo to be a solid starting point. But for your needs, go with Yaws. On Mar 8, 2006, at 6:24 PM, Ryan Rawson wrote: > Would you mind elaborating a bit for me? > > Basically my application is going to automatically discover its fellow > node hosts via LDAP (we have a mechanism already), use mnesia to > maintain state across them, and each machine is going to answer HTTP > service queries. That is my current design - one major advantage is > mnesia is a great cross-node database, great for high read/low write > services that need to be able to answer questions quickly but don't do > nearly as many writes. This particular feature of my app made me > think Erlang would be well suited. > > -ryan > > > On 3/8/06, Sean Hinde wrote: >> That is interesting feedback on the tutorial, thanks. >> >> If I can find time to wrestle again with the sometimes flaky update >> mechanism on the Trapexit HOWTO pages I will update the tutorial to >> point out more clearly where the API to user code appears, as well as >> checkin a couple of bugfixes from folks who have studied it deeply >> and posted on the list. >> >> FWIW I would recommend Yaws most highly for new applications. It is >> not nearly as heavyweight as it first appears, and is very nice for >> writing this sort of application. >> >> Sean >> >> On 8 Mar 2006, at 08:11, Ryan Rawson wrote: >> >>> I didn't like the howto - it seemed like my code would be littered >>> with http protocol droppings, even though the actual framing is >>> taken >>> care of by the http packet mode. >>> >>> I think for me, yaws seems like this whole big thing, and kind of >>> bothers me - enough to look at alternatives first. >>> >>> Thanks for the mod_esi pointer. >>> >>> -ryan >>> >>> On 3/8/06, Matthias Lang wrote: >>>> >>>> A "REST" web service seems to be some sort philosophy for how to >>>> design a service. For the purpose of "how do I do this in >>>> Erlang", I >>>> think it just boils down to "how do I serve dynamically >>>> generated web >>>> pages". If it's not, then my answer probably misses the point. >>>> >>>> So: if you just want to serve dynamic web pages, you can choose >>>> between two ready-made web servers: OTP's httpd and YAWS. Both web >>>> servers are used in the real world. They have different peformance >>>> tradeoffs and different approaches to interfacing with 'your' >>>> application. YAWS seems to be more popular for new applications. If >>>> you can't make up your mind about which one to use, flip a coin. >>>> >>>> The OTP httpd interface you probably want to use is 'mod_esi': >>>> >>>> http://www.erlang.org/doc/doc-5.4.12/lib/inets-4.6.2/doc/html/ >>>> mod_esi.html >>>> >>>> Writing code to use it is straightforward, the hard part is all the >>>> fudging around with httpd.conf. 
>>>> >>>> If, on the other hand, you want to write your own web server 'from >>>> scratch', then the undocumented http mode of the packet driver is >>>> useful. That's what the 'howto' you found is about. >>>> >>>> Matthias >>>> >>>> -------------------- >>>> >>>> Ryan Rawson writes: >>>>> Hi all, >>>>> >>>>> I read the 'fast httpd' howto from trapexit.org, and I also >>>>> looked at >>>>> the httpd module in OTS. I'm a little confused - it seems to me >>>>> that >>>>> the httpd howto doesn't use the httpd module, it uses a >>>>> undocumented >>>>> feature of the packet driver (which may in turn internally use the >>>>> httpd module). While the httpd documentation seems to describe >>>>> callbacks but its kind of thinly documented. Not the end of the >>>>> world, but I'm confused - what is the recommended thing to do >>>>> here? >>>>> What do other people do? Say for example, creating a REST "web >>>>> service" ? >>>>> >>>>> Thanks in advance for any tips and hints. >>>>> -ryan >>>> >> >> From dsolaz@REDACTED Wed Mar 8 21:49:41 2006 From: dsolaz@REDACTED (Daniel Solaz) Date: Wed, 08 Mar 2006 21:49:41 +0100 Subject: R10B-10: bug in os_mon memsup.c? In-Reply-To: References: Message-ID: <440F4365.6060605@sistelcom.com> This is on IRIX 6.5 (unsupported by os_mon). Using GCC 3.3.something in 32-bit mode. gmake[4]: Entering directory `.../otp_src_R10B-10/lib/os_mon/c_src' gcc -c -o ../priv/obj/mips-sgi-irix6.5/memsup.o -g -O2 -I.../otp_src_R10B-10/erts/mips-sgi-irix6.5 -DHAVE_CONFIG_H memsup.c memsup.c: In function `get_basic_mem': memsup.c:230: error: `shiftleft' undeclared (first use in this function) memsup.c:230: error: (Each undeclared identifier is reported only once memsup.c:230: error: for each function it appears in.) gmake[4]: *** [../priv/obj/mips-sgi-irix6.5/memsup.o] Error 1 In prior versions 'shiftleft' used to be an argument to get_basic_mem(); then memsup compiled and ran just fine on IRIX but returned bogus memory values as expected. After the rewrite it seems it's been overlooked, since conditional compilation keeps this out of the way on os_mon-supported platforms. BTW I'm about to finish hacking os_mon so that it works on IRIX. Probably nobody cares, but I'd rather contribute my patches than maintain an always off-line, always outdated 'Erlang on IRIX' web page. Regards. -Daniel From richardc@REDACTED Wed Mar 8 22:00:21 2006 From: richardc@REDACTED (Richard Carlsson) Date: Wed, 08 Mar 2006 22:00:21 +0100 Subject: optimization of list comprehensions In-Reply-To: <200603070006.k27060ki296681@atlas.otago.ac.nz> References: <200603070006.k27060ki296681@atlas.otago.ac.nz> Message-ID: <440F45E5.7020400@it.uu.se> Richard A. O'Keefe wrote: > In fact, I spent about 20 minutes looking at foldl calls in the OTP > sources, [...] > Here's the only complex example I found in those 20 minutes: > > find_modules(P, [Path|Paths], Ext, S0) -> > case file:list_dir(filename:join(Path, P)) > of {ok, Fs} -> > S1 = lists:foldl( > fun(F, S) -> sets:add_element(filename:rootname(F, Ext), S) end, > S0, > [F || F <- Fs, filename:extension(F) =:= Ext]), > find_modules(P, Paths, Ext, S1) > ; _ -> > find_modules(P, Paths, Ext, S0) > end; > find_modules(_P, [], _Ext, S) -> > sets:to_list(S). I feel honored. 
:-) /Richard Carlsson From ryanobjc@REDACTED Wed Mar 8 22:13:15 2006 From: ryanobjc@REDACTED (Ryan Rawson) Date: Wed, 8 Mar 2006 13:13:15 -0800 Subject: httpd module vs inets {packet,http} In-Reply-To: References: <78568af10603072323m5e009bd6x4f1107f0a37e5340@mail.gmail.com> <17422.36980.296819.754843@antilipe.corelatus.se> <78568af10603080011u1ee06e1fkcbbbc74ffc213cff@mail.gmail.com> <7A9B007F-BAB5-4F8C-8ED3-EF8D716D1F53@gmail.com> <78568af10603080224g29f61eccwd0c7a2b40b762d55@mail.gmail.com> Message-ID: <78568af10603081313s2efebfc3iebfea2f4b1894da0@mail.gmail.com> Thanks for the advice guys. My understanding of REST is basically a rejection of SOAP - essentially instead of saying "use POST and send this XML request document and get a reply", instead, embed the request in the URI/URL. Use what we already know works - GET for read-requests and POST for write-requests. -ryan On 3/8/06, Jon Hancock wrote: > My understanding is that REST is all about returning XML documents. > So a gross overview of the REST transaction life-cycle is: > 1 - fetch incoming requests (I think REST requests come wrapped in > an HTTP format?) > 2 - decode the request's URI and parameters > 3 - statically or dynamically construct the resource to return > 4 - return the resource as an XML doc > > Yaws will be a champ at this sort of thing. > The HowTo you reference is enlightening and well written. The HowTo > shows how to write your own socket server. You would only want to > do this for something very specialized. For example, if you wanted > to build a new server to handle some custom protocol you've written > you would start with this HowTo structure and build from there. I am > building an app that has just this requirement and I found the HowTo > to be a solid starting point. > But for your needs, go with Yaws. > > On Mar 8, 2006, at 6:24 PM, Ryan Rawson wrote: > > > Would you mind elaborating a bit for me? > > > > Basically my application is going to automatically discover its fellow > > node hosts via LDAP (we have a mechanism already), use mnesia to > > maintain state across them, and each machine is going to answer HTTP > > service queries. That is my current design - one major advantage is > > mnesia is a great cross-node database, great for high read/low write > > services that need to be able to answer questions quickly but don't do > > nearly as many writes. This particular feature of my app made me > > think Erlang would be well suited. > > > > -ryan > > > > > > On 3/8/06, Sean Hinde wrote: > >> That is interesting feedback on the tutorial, thanks. > >> > >> If I can find time to wrestle again with the sometimes flaky update > >> mechanism on the Trapexit HOWTO pages I will update the tutorial to > >> point out more clearly where the API to user code appears, as well as > >> checkin a couple of bugfixes from folks who have studied it deeply > >> and posted on the list. > >> > >> FWIW I would recommend Yaws most highly for new applications. It is > >> not nearly as heavyweight as it first appears, and is very nice for > >> writing this sort of application. > >> > >> Sean > >> > >> On 8 Mar 2006, at 08:11, Ryan Rawson wrote: > >> > >>> I didn't like the howto - it seemed like my code would be littered > >>> with http protocol droppings, even though the actual framing is > >>> taken > >>> care of by the http packet mode. > >>> > >>> I think for me, yaws seems like this whole big thing, and kind of > >>> bothers me - enough to look at alternatives first. > >>> > >>> Thanks for the mod_esi pointer. 
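A sketch of how the GET-for-reads / POST-for-writes split described above might look in a Yaws out/1 callback. The record fields come from yaws_api.hrl; the include path and the handler functions are assumptions for illustration only:

    -include("yaws_api.hrl").   %% actual path depends on the Yaws installation

    out(Arg) ->
        Req = Arg#arg.req,
        case Req#http_request.method of
            'GET'  -> handle_read(Arg);    %% hypothetical handler
            'POST' -> handle_write(Arg);   %% hypothetical handler
            _      -> {status, 405}
        end.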
> >>> > >>> -ryan > >>> > >>> On 3/8/06, Matthias Lang wrote: > >>>> > >>>> A "REST" web service seems to be some sort philosophy for how to > >>>> design a service. For the purpose of "how do I do this in > >>>> Erlang", I > >>>> think it just boils down to "how do I serve dynamically > >>>> generated web > >>>> pages". If it's not, then my answer probably misses the point. > >>>> > >>>> So: if you just want to serve dynamic web pages, you can choose > >>>> between two ready-made web servers: OTP's httpd and YAWS. Both web > >>>> servers are used in the real world. They have different peformance > >>>> tradeoffs and different approaches to interfacing with 'your' > >>>> application. YAWS seems to be more popular for new applications. If > >>>> you can't make up your mind about which one to use, flip a coin. > >>>> > >>>> The OTP httpd interface you probably want to use is 'mod_esi': > >>>> > >>>> http://www.erlang.org/doc/doc-5.4.12/lib/inets-4.6.2/doc/html/ > >>>> mod_esi.html > >>>> > >>>> Writing code to use it is straightforward, the hard part is all the > >>>> fudging around with httpd.conf. > >>>> > >>>> If, on the other hand, you want to write your own web server 'from > >>>> scratch', then the undocumented http mode of the packet driver is > >>>> useful. That's what the 'howto' you found is about. > >>>> > >>>> Matthias > >>>> > >>>> -------------------- > >>>> > >>>> Ryan Rawson writes: > >>>>> Hi all, > >>>>> > >>>>> I read the 'fast httpd' howto from trapexit.org, and I also > >>>>> looked at > >>>>> the httpd module in OTS. I'm a little confused - it seems to me > >>>>> that > >>>>> the httpd howto doesn't use the httpd module, it uses a > >>>>> undocumented > >>>>> feature of the packet driver (which may in turn internally use the > >>>>> httpd module). While the httpd documentation seems to describe > >>>>> callbacks but its kind of thinly documented. Not the end of the > >>>>> world, but I'm confused - what is the recommended thing to do > >>>>> here? > >>>>> What do other people do? Say for example, creating a REST "web > >>>>> service" ? > >>>>> > >>>>> Thanks in advance for any tips and hints. > >>>>> -ryan > >>>> > >> > >> > > From ok@REDACTED Thu Mar 9 02:06:01 2006 From: ok@REDACTED (Richard A. O'Keefe) Date: Thu, 9 Mar 2006 14:06:01 +1300 (NZDT) Subject: optimization of list comprehensions Message-ID: <200603090106.k29161Si311122@atlas.otago.ac.nz> Mats Cronqvist wrote: i see you didn't bother clicking on that link. It's not a matter of "didn't bother", it's a matter of CAN'T. Because I keep sensitive assignment and examination details on my machine, I need to use a secure Mail User Agent. The one I use *doesn't* have a GUI and *can't* follow links (so it can't follow links of malware either). To follow that link I'd have to switch over to another machine and type the link in by hand. Sprenger and Kramer (the authors of Malleus Malleficarum) got a lot of things wrong, but they got one thing right: when you quote, qoute enough so that your reader doesn't *have* to look up the volume. OK, so _that_ message used round parentheses for folding, but _other_ messages in this thread have used round parentheses for so-the-side-effects-and-throw-the-results-away. 
Let's look at the syntax again: (expr(X1,...,Xn,Y1,...,Ym) || PatX <- List, PatY <-- Acc0 => lists:foldl(fun(PatX, PatY) -> expr(X1,...,Xn,Y1,...,Ym) end, List, Acc0) So the whole difference between (f(X,Y) || X <- List, Y <- Init) % _ = [f(X,Y) || X <- List, Y <- Init] and (f(X,Y) || X <- List, Y <-- Init % foldl(fun(X,Y)->f(X,Y) end, List, Init) is that one of them has Y <- Init (one dash) and the other has Y <-- Init (two dashes). NOT a good notation. Also, not a completely defined notation. List comprehensions allow multiple generators, and also filters. - can you put Y <-- before X <- ? - why is the order of X <- and Y <-- the *opposite* of the order you would expect from list notation? - does this notation allow multiple generators? - does it allow multiple <-- accumulators? - does it allow filters? (In that message, the answer is - don't know, not stated - don't know, author didn't apparently realise it made a difference - no, author said not clear how to do this - no, a major weakness (you have to use tuples, which misses the point) - no, thought to be "trivial" but not actually done (You'll perceive from this that I _have_ fired up the other machine and _have_ retyped that URL. What a waste of my time.) We don't need incomplete "solutions". Turn the syntax inside out: ( Result_Expr where ( Pat1 = Init1 then Step1, ..., Patn = Initn then Stepn || generators and filters ) so the sum-a-list-of-pairs example from the cited message isn't the confusing ({X+XAcc,Y+YAcc} || {X,Y} <- List, {XAcc,YAcc} <-- {0,0}) but ({XSum,YSum} where XSum = 0 then XSum+X, YSum = 0 then YSum+Y || {X,Y} <- List ) where the existence of multiple accumulators means that even a fairly dumb compiler could avoid generating the intermediate tuples. Pat = Init then Pat could be written as Pat = Init, thus introducing a local binding, and if there are no Step expressions, there needn't be a || part either, so ({X,Y} where X = ..., Y = ...) would be the Erlang equivalent of a Haskell 'where', something that some Erlang programmers have wanted for years. the example was just that; an example, and kept deliberately simple at that. That simply won't _do_ as an argument. It was supposed to be a good enough example to make the point. In fact it supports the opposite point. So far, we don't have *any* realistic examples where a new notation would help. the point of the example was that fold-like comprehensions would change the way people write code. But that example did NOT demonstrate that point. What we need is *realistic* examples. Ideally, examples taken from actual working code. alas, i am not a researcher (any longer). but maybe i can find the time to grep around a bit in the sources. You don't have to be a "researcher" to publish.. > But if I understand you, you are claiming that normal industry programmers > are incompetent, ineducable, or both. no, you do most certainly not understand me. and it actually seems you're making a concious effort to misunderstand. I may not have understood *you*, but I certainly understood what you wrote. You made the (arguably defamatory) claim that "normal industry programmers" were unable or unwilling to use a fundamental aspect of the language appropriately. i don't think a reluctance to use funs indicates incompetence. If we were talking about C++ or Java, you could be right. But in a functional language? This is like claiming that a reluctance to use classes is compatible with competence in C++. 
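For comparison, the sum-a-list-of-pairs example above can be written today with plain lists:foldl (the function name is invented; note the argument order fun, initial accumulator, list):

    sum_pairs(List) ->
        lists:foldl(fun({X, Y}, {XAcc, YAcc}) -> {XAcc + X, YAcc + Y} end,
                    {0, 0},
                    List).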
i remain convinced that fold-like comprehensions would be an excellent thing, just like map-like comprehensions were. That's nice for you. How about trying to convince others? What we need are realistic (ideally _real_) examples of code which can be made a lot more readable by some good notation. The notation I've used in this message is beginning to seem plausible. I'm particularly pleased that it has a the ISWIM 'where' as a natural special case. But the fact that I've now found a notation for this job that I like does not mean that I think there is evidence that Erlang would be the better for including it. For that, we need evidence. From casper2000a@REDACTED Thu Mar 9 04:28:14 2006 From: casper2000a@REDACTED (Eranga Udesh) Date: Thu, 9 Mar 2006 09:28:14 +0600 Subject: updates to new rdbms In-Reply-To: Message-ID: <20060309032947.40535400084@mail.omnibis.com> No way. Erlang mailing list is a huge knowledge base. Though even myself doesn't do through all the mails published here, when I face some problems or when I need to find info, I also do a query on back published discussions. It's very important to have them at a centralized archive (mailing list), so that it's easy to search. Anyway as klacke mentioned, may be its good to have thread hiding feature in MUAs, so that anybody can hide/delete unnecessary thread for individual preferences. May be it's time to put a Feature request to Thunderbird, M$, etc. Cheers, - Eranga -----Original Message----- From: owner-erlang-questions@REDACTED [mailto:owner-erlang-questions@REDACTED] On Behalf Of Ulf Wiger (AL/EAB) Sent: Tuesday, March 07, 2006 8:03 PM To: Claes Wikstrom Cc: erlang-questions@REDACTED Subject: RE: updates to new rdbms If that's a widely held opinion, I can of course take the discussion elsewhere. (Or perhaps you didn't mean just this thread, since there have certainly been longer threads that may not have had universal appeal?) Except there isn't any obvious "elsewhere", since the trapexit forums are down. Creating different mailing lists on the jungerl sourceforge project would be one option. I welcome any suggestions/comments. /Ulf W > -----Original Message----- > From: Claes Wikstrom [mailto:klacke@REDACTED] > Sent: den 7 mars 2006 14:52 > To: Ulf Wiger (AL/EAB) > Cc: erlang-questions@REDACTED > Subject: Re: updates to new rdbms > > Ulf Wiger (AL/EAB) wrote: > > I've checked in a bug fix of rdbms_index, > > ..... > > Sometimes one wish that ones MUA had a feature whereby a mail > thread/topic could be marked as > > "never ever show me mail in this thread .. ever again" > > > /klacke > > > -- > Claes Wikstrom -- Caps lock is nowhere and > http://www.hyber.org -- everything is under control > cellphone: +46 70 2097763 > From ke.han@REDACTED Thu Mar 9 07:01:49 2006 From: ke.han@REDACTED (ke han) Date: Thu, 9 Mar 2006 14:01:49 +0800 Subject: httpd module vs inets {packet,http} In-Reply-To: <78568af10603081313s2efebfc3iebfea2f4b1894da0@mail.gmail.com> References: <78568af10603072323m5e009bd6x4f1107f0a37e5340@mail.gmail.com> <17422.36980.296819.754843@antilipe.corelatus.se> <78568af10603080011u1ee06e1fkcbbbc74ffc213cff@mail.gmail.com> <7A9B007F-BAB5-4F8C-8ED3-EF8D716D1F53@gmail.com> <78568af10603080224g29f61eccwd0c7a2b40b762d55@mail.gmail.com> <78568af10603081313s2efebfc3iebfea2f4b1894da0@mail.gmail.com> Message-ID: <772F524D-5924-42E2-9AD3-C583DB2E41CA@redstarling.com> Fair enough. Either way, your talking about needing a web server that handles http requests with agility and has all the bells and whisltes for production. 
You don't need to roll your own server in this case. As other have said, if your starting a new project use Yaws. If you have to use legacy code relying on inets, then its not a bad code base either. good luck, ke han On Mar 9, 2006, at 5:13 AM, Ryan Rawson wrote: > Thanks for the advice guys. > > My understanding of REST is basically a rejection of SOAP - > essentially instead of saying "use POST and send this XML request > document and get a reply", instead, embed the request in the URI/URL. > Use what we already know works - GET for read-requests and POST for > write-requests. > > -ryan > > On 3/8/06, Jon Hancock wrote: >> My understanding is that REST is all about returning XML documents. >> So a gross overview of the REST transaction life-cycle is: >> 1 - fetch incoming requests (I think REST requests come >> wrapped in >> an HTTP format?) >> 2 - decode the request's URI and parameters >> 3 - statically or dynamically construct the resource to >> return >> 4 - return the resource as an XML doc >> >> Yaws will be a champ at this sort of thing. >> The HowTo you reference is enlightening and well written. The HowTo >> shows how to write your own socket server. You would only want to >> do this for something very specialized. For example, if you wanted >> to build a new server to handle some custom protocol you've written >> you would start with this HowTo structure and build from there. I am >> building an app that has just this requirement and I found the HowTo >> to be a solid starting point. >> But for your needs, go with Yaws. >> >> On Mar 8, 2006, at 6:24 PM, Ryan Rawson wrote: >> >>> Would you mind elaborating a bit for me? >>> >>> Basically my application is going to automatically discover its >>> fellow >>> node hosts via LDAP (we have a mechanism already), use mnesia to >>> maintain state across them, and each machine is going to answer HTTP >>> service queries. That is my current design - one major advantage is >>> mnesia is a great cross-node database, great for high read/low write >>> services that need to be able to answer questions quickly but >>> don't do >>> nearly as many writes. This particular feature of my app made me >>> think Erlang would be well suited. >>> >>> -ryan >>> >>> >>> On 3/8/06, Sean Hinde wrote: >>>> That is interesting feedback on the tutorial, thanks. >>>> >>>> If I can find time to wrestle again with the sometimes flaky update >>>> mechanism on the Trapexit HOWTO pages I will update the tutorial to >>>> point out more clearly where the API to user code appears, as >>>> well as >>>> checkin a couple of bugfixes from folks who have studied it deeply >>>> and posted on the list. >>>> >>>> FWIW I would recommend Yaws most highly for new applications. It is >>>> not nearly as heavyweight as it first appears, and is very nice for >>>> writing this sort of application. >>>> >>>> Sean >>>> >>>> On 8 Mar 2006, at 08:11, Ryan Rawson wrote: >>>> >>>>> I didn't like the howto - it seemed like my code would be littered >>>>> with http protocol droppings, even though the actual framing is >>>>> taken >>>>> care of by the http packet mode. >>>>> >>>>> I think for me, yaws seems like this whole big thing, and kind of >>>>> bothers me - enough to look at alternatives first. >>>>> >>>>> Thanks for the mod_esi pointer. >>>>> >>>>> -ryan >>>>> >>>>> On 3/8/06, Matthias Lang wrote: >>>>>> >>>>>> A "REST" web service seems to be some sort philosophy for how to >>>>>> design a service. 
For the purpose of "how do I do this in >>>>>> Erlang", I >>>>>> think it just boils down to "how do I serve dynamically >>>>>> generated web >>>>>> pages". If it's not, then my answer probably misses the point. >>>>>> >>>>>> So: if you just want to serve dynamic web pages, you can choose >>>>>> between two ready-made web servers: OTP's httpd and YAWS. Both >>>>>> web >>>>>> servers are used in the real world. They have different >>>>>> peformance >>>>>> tradeoffs and different approaches to interfacing with 'your' >>>>>> application. YAWS seems to be more popular for new >>>>>> applications. If >>>>>> you can't make up your mind about which one to use, flip a coin. >>>>>> >>>>>> The OTP httpd interface you probably want to use is 'mod_esi': >>>>>> >>>>>> http://www.erlang.org/doc/doc-5.4.12/lib/inets-4.6.2/doc/html/ >>>>>> mod_esi.html >>>>>> >>>>>> Writing code to use it is straightforward, the hard part is >>>>>> all the >>>>>> fudging around with httpd.conf. >>>>>> >>>>>> If, on the other hand, you want to write your own web server >>>>>> 'from >>>>>> scratch', then the undocumented http mode of the packet driver is >>>>>> useful. That's what the 'howto' you found is about. >>>>>> >>>>>> Matthias >>>>>> >>>>>> -------------------- >>>>>> >>>>>> Ryan Rawson writes: >>>>>>> Hi all, >>>>>>> >>>>>>> I read the 'fast httpd' howto from trapexit.org, and I also >>>>>>> looked at >>>>>>> the httpd module in OTS. I'm a little confused - it seems to me >>>>>>> that >>>>>>> the httpd howto doesn't use the httpd module, it uses a >>>>>>> undocumented >>>>>>> feature of the packet driver (which may in turn internally >>>>>>> use the >>>>>>> httpd module). While the httpd documentation seems to describe >>>>>>> callbacks but its kind of thinly documented. Not the end of the >>>>>>> world, but I'm confused - what is the recommended thing to do >>>>>>> here? >>>>>>> What do other people do? Say for example, creating a REST "web >>>>>>> service" ? >>>>>>> >>>>>>> Thanks in advance for any tips and hints. >>>>>>> -ryan >>>>>> >>>> >>>> >> >> From pete-expires-20060401@REDACTED Wed Mar 8 22:24:12 2006 From: pete-expires-20060401@REDACTED (Pete Kazmier) Date: Wed, 08 Mar 2006 16:24:12 -0500 Subject: Parse Transform Question Message-ID: <87r75cmxz7.fsf@coco.kazmier.com> Can I use a parse transform to take code that looks like this: test() -> cond predicate1() -> do1(); predicate2() -> do2(); predicate3() -> do3() end. And transform it to this: test() -> case predicate1() of true -> do1(); false -> case predicate2() of true -> do2(); false -> case predicate3() of true -> do3(); false -> false end end end. While writing some code today, I ended up with something that looks like this: test() -> if predicate1() -> do1(); predicate2() -> do2(); predicate3() -> do3() end. I then discovered that one can only use valid guard expressions within an 'if'. Converting the above to a bunch of nested 'case' statements was not very appealing to me as it looks ugly. Could I use a parse transform to give me the syntax that I am seeking? What I have ended doing for the moment is: cond([]) -> false; cond([{Predicate, Func} | Rest]) -> case Predicate() of true -> Func(); false -> cond(Rest) end. So I end up with: test() -> cond([{fun predicate1/0, fun do1/0}, {fun predicate2/0, fun do2/0}, {fun predicate3/0, fun do3/0}]). Of course I am simplifying my code greatly. 
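One detail about the cond/1 helper shown above: cond is a reserved word in Erlang, so the function name needs to be quoted for the helper to compile as written. A sketch, otherwise identical to the code above:

    'cond'([]) -> false;
    'cond'([{Predicate, Func} | Rest]) ->
        case Predicate() of
            true  -> Func();
            false -> 'cond'(Rest)
        end.

    %% and correspondingly at the call site:
    test() ->
        'cond'([{fun predicate1/0, fun do1/0},
                {fun predicate2/0, fun do2/0},
                {fun predicate3/0, fun do3/0}]).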
My original code fragment that started this whole quest is the following (which doesn't work): process_directory(Base, [], Ancestors) -> []; process_directory(Base, [File|Rest], Ancestors) -> Path = filename:join(Base, File), if filelib:is_file(Path) andalso lists:suffix(".txt", File) -> Story = read_story(Path, lists:reverse(Ancestors)), [Story|process_directory(Base, Rest, Ancestors)]; filelib:is_dir(Path) -> {ok, Contents} = file:list_dir(Path), lists:append(process_directory(Base, Rest, Ancestors), process_directory(Base, Contents, [File|Ancestors])); true -> process_directory(Base, Rest, Ancestors) end. Thanks, Pete From mats.cronqvist@REDACTED Thu Mar 9 09:14:47 2006 From: mats.cronqvist@REDACTED (Mats Cronqvist) Date: Thu, 09 Mar 2006 09:14:47 +0100 Subject: optimization of list comprehensions In-Reply-To: <200603090106.k29161Si311122@atlas.otago.ac.nz> References: <200603090106.k29161Si311122@atlas.otago.ac.nz> Message-ID: <440FE3F7.4020709@ericsson.com> Richard A. O'Keefe wrote: [deleted bits about richard o'keefe's mail reader] [deleted bits about how silly my example was] > alas, i am not a researcher (any longer). but maybe i can find > the time to grep around a bit in the sources. > > You don't have to be a "researcher" to publish.. no, but to find the time to write a paper. maybe we can co-author? > > But if I understand you, you are claiming that normal industry programmers > > are incompetent, ineducable, or both. > > no, you do most certainly not understand me. and it actually > seems you're making a concious effort to misunderstand. > > I may not have understood *you*, but I certainly understood what you > wrote. You made the (arguably defamatory) claim that "normal industry > programmers" were unable or unwilling to use a fundamental aspect of > the language appropriately. *I* said the "normal industry programmer" prefers to write code (that works just fine) without using funs. *You* claim that means they're incomptetent. > i don't think a reluctance to use funs indicates incompetence. > > If we were talking about C++ or Java, you could be right. > But in a functional language? This is like claiming that a reluctance > to use classes is compatible with competence in C++. now, suppose i was a project manager, and some guy managed to write some excellent C++ code that did just what i wanted, but he didn't use classes. the code's not very snazzy, but fully readable. if i understand you correctly, you consider that guy incomptetent, and i don't. it is perfectly possible to be highly competent in the domain, and write perfectly fine Erlang, *AND* never use funs (except when using mnesia of course). > That's nice for you. How about trying to convince others? yeah, that's a good idea! but maybe trying to do so will cause me to be flamed by some wierdo, waste a ton of my time, and bore everyone on the mailing list to tears? so i'll give it a miss. mats From vlad_dumitrescu@REDACTED Thu Mar 9 10:55:34 2006 From: vlad_dumitrescu@REDACTED (Vlad Dumitrescu) Date: Thu, 9 Mar 2006 10:55:34 +0100 Subject: Parse Transform Question In-Reply-To: <87r75cmxz7.fsf@coco.kazmier.com> Message-ID: Hi, > -----Original Message----- > From: owner-erlang-questions@REDACTED > [mailto:owner-erlang-questions@REDACTED] On Behalf Of Pete Kazmier > Can I use a parse transform to take code that looks like this: > > test() -> > cond > predicate1() -> do1(); > predicate2() -> do2(); > predicate3() -> do3() > end. No, you can't. 
This is because in order to be able to apply a parse treansform, the original code must be syntactically valid Erlang code, and the cond construct isn't. Also, cond is now a reserved keyword, and a cond construct is coming soon in a release near you. You could write something like test() -> control:'cond'( fun(predicate1()) -> do1(); (predicate2()) -> do2(); (predicate3()) -> do3() end ). and use a parse transform for this one. Best regards, Vlad From ulf.wiger@REDACTED Thu Mar 9 13:41:35 2006 From: ulf.wiger@REDACTED (Ulf Wiger (AL/EAB)) Date: Thu, 9 Mar 2006 13:41:35 +0100 Subject: optimization of list comprehensions Message-ID: > I think it's safe to say that even "average" industrial > programmers rather quickly learn to exploit the virtues > of higher-order functions and iterators. > > So yes, that seems to be a safe conclusion. Actually, no, because my percentages were off by an order of magnitude. Recalculated, the numbers are decidedly less supportive of the conclusion. /Ulf W From mats.cronqvist@REDACTED Thu Mar 9 15:01:10 2006 From: mats.cronqvist@REDACTED (Mats Cronqvist) Date: Thu, 09 Mar 2006 15:01:10 +0100 Subject: optimization of list comprehensions In-Reply-To: References: Message-ID: <44103526.6050603@ericsson.com> Richard A. O'Keefe wrote: > If you have evidence to back this up, it's publishable. > Please publish it. Some sort of survey analysing what kind of > programmers use what kind of language features would be most illuminating. Ulf Wiger (AL/EAB) wrote: > I think it's safe to say that even "average" industrial > programmers rather quickly learn to exploit the virtues > of higher-order functions and iterators. not to be outdone by wiger (and since o'keefe said please), i ran a few greps on 1,748,162 lines of erlang sources (comments included). i estimate that some 3-400 people have contributed to the code sample. i counted how many times each of the patterns below appeared per module. "[ (]fun[ (]" "lists:fold" "lists:map" "lists:foreach" never 64% 89% 84% 85% <3 times 81% 97% 94% 94% 36% of the modules defined at least one fun. however, 15% of all modules had at least one call to mnesia:transaction. so i feel pretty confident stating that ~80% of all modules have no (non-mnesia related) funs. another observation is that although there were ~2 fun definition per module, 4% of the modules accounts for 50% of the fun definitions. one possible interpretation is that most of this code was produced by programmers that rarely use funs if they don't have to. obviously, the methodology is much too weak to *prove* this. i'm not sure it can be proven, since there's no way to establish who wrote what. most modules have been touched by at least a handful of people, and everyone has worked on more than one module. note also that most of this code sample has been used for years in a very challenging environment, and is part of one of the most stable products in its market. i does seem safe to say that one can write well working Erlang code while rarely, if ever, using funs (excluding mnesia use, as always). mats From joe.armstrong@REDACTED Thu Mar 9 15:22:33 2006 From: joe.armstrong@REDACTED (Joe Armstrong (AL/EAB)) Date: Thu, 9 Mar 2006 15:22:33 +0100 Subject: Multi-core Erlang Message-ID: Hello list, Following Rickards post I have now got my hands on a dual core dual processor system - (ie 4 CPUs) and we have been able to reproduce Richards results. 
I have posted a longer article (with images) on my newly started blog http://www.erlang-stuff.net/blog/ This shows a 3.6 factor speedup for a message passing benchmark and 1.8 for an application program (a SIP stack) - these are good results. The second program in particular was not written to avoid sequential bottlenecks. Despite this it ran 1.8 times faster on a 4 CPU system than on a one CPU system. The nice thing about these results were that the benchmark ran almost 4 times faster - this benchmark just did spawns message passing and computations and had no sequential bottlenecks - pure code made from lots of small processes seems to speed up nicely on a multi-core system. Well done Rickard So how do you make stuff that goes fast? - go parallel Cheers /Joe > -----Original Message----- > From: owner-erlang-questions@REDACTED > [mailto:owner-erlang-questions@REDACTED] On Behalf Of Rickard Green > Sent: den 7 mars 2006 17:52 > To: erlang-questions@REDACTED > Subject: [Fwd: Message passing benchmark on smp emulator] > > Trying again... > > -------- Original Message -------- > Subject: Message passing benchmark on smp emulator > Date: Tue, 07 Mar 2006 17:30:40 +0100 > From: Rickard Green > Newsgroups: erix.mailing-list.erlang-questions > > The message passing benchmark used in estone (and bstone) > isn't very well suited for the smp emulator since it sends a > message in a ring (more or less only 1 process runnable all the time). > > In order to be able to take advantage of an smp emulator I > wrote another message passing benchmark. In this benchmark > all participating processes sends a message to all processes > and waits for replies on the sent messages. > > I've attached the benchmark. Run like this: > big:bang(NoOfParticipatingProcesses). > > I ran the benchmark on a machine with two hyperthreaded Xeon > 2.40GHz processors. > > big:bang(50): > * r10b completed after about 0.014 seconds. > * p11b with 4 schedulers completed after about 0.018 seconds. > > big:bang(100): > * r10b completed after about 0.088 seconds. > * p11b with 4 schedulers completed after about 0.088 seconds. > > big:bang(300): > * r10b completed after about 2.6 seconds. > * p11b with 4 schedulers completed after about 1.0 seconds. > > big:bang(500): > * r10b completed after about 10.7 seconds. > * p11b with 4 schedulers completed after about 3.5 seconds. > > big:bang(600): > * r10b completed after about 18.0 seconds. > * p11b with 4 schedulers completed after about 5.8 seconds. > > big:bang(700): > * r10b completed after about 27.0 seconds. > * p11b with 4 schedulers completed after about 9.3 seconds. > > Quite a good result I guess. > > Note that this is a special case and these kind of speedups > are not expected for an arbitrary Erlang program. > > If you want to try yourself download a P11B snapshot at: > http://www.erlang.org/download/snapshots/ > remember to enable smp support: > ./configure --enable-smp-support --disable-lock-checking > > You can change the number of schedulers used by passing the > +S command line argument to erl or by calling: > erlang:system_flag(schedulers, NoOfSchedulers) -> > {ok|PosixError, CurrentNo, OldNo} > > /Rickard Green, Erlang/OTP > > > From ulf.wiger@REDACTED Thu Mar 9 15:45:01 2006 From: ulf.wiger@REDACTED (Ulf Wiger (AL/EAB)) Date: Thu, 9 Mar 2006 15:45:01 +0100 Subject: optimization of list comprehensions Message-ID: In the face of Mats's convincing evidence and my own inability to calculate percentages, I must concede the point, at least as regards AXD 301 & derivatives. 
One should perhaps note that, at least for programmers reared in the AXD 301 environment, funs were initially slow and had a tendency to break during upgrade. Also, using list comprehensions would be about twice as slow as hand-coded iterator functions. These problems have long since been fixed, but perceptions have a tendency to linger, especially in a conservative setting, like doing maintenance on a "5-nines" system. Regards, Ulf W > -----Original Message----- > From: owner-erlang-questions@REDACTED > [mailto:owner-erlang-questions@REDACTED] On Behalf Of Mats Cronqvist > Sent: den 9 mars 2006 15:01 > To: erlang-questions@REDACTED > Subject: Re: optimization of list comprehensions > > Richard A. O'Keefe wrote: > > > If you have evidence to back this up, it's publishable. > > Please publish it. Some sort of survey analysing what > kind of > programmers use what kind of language features > would be most illuminating. > > Ulf Wiger (AL/EAB) wrote: > > > I think it's safe to say that even "average" industrial programmers > > rather quickly learn to exploit the virtues of higher-order > functions > > and iterators. > > not to be outdone by wiger (and since o'keefe said > please), i ran a few greps on 1,748,162 lines of erlang > sources (comments included). i estimate that some 3-400 > people have contributed to the code sample. > i counted how many times each of the patterns below > appeared per module. > > "[ (]fun[ (]" "lists:fold" "lists:map" "lists:foreach" > never 64% 89% 84% 85% > <3 times 81% 97% 94% 94% > > 36% of the modules defined at least one fun. however, 15% > of all modules had at least one call to mnesia:transaction. > so i feel pretty confident stating that > ~80% of all modules have no (non-mnesia related) funs. > another observation is that although there were ~2 fun > definition per module, 4% of the modules accounts for 50% of > the fun definitions. > one possible interpretation is that most of this code was > produced by programmers that rarely use funs if they don't > have to. obviously, the methodology is much too weak to > *prove* this. i'm not sure it can be proven, since there's no > way to establish who wrote what. most modules have been > touched by at least a handful of people, and everyone has > worked on more than one module. > note also that most of this code sample has been used for > years in a very challenging environment, and is part of one > of the most stable products in its market. > i does seem safe to say that one can write well working > Erlang code while rarely, if ever, using funs (excluding > mnesia use, as always). > > mats > From launoja@REDACTED Thu Mar 9 21:39:37 2006 From: launoja@REDACTED (Jani Launonen) Date: Thu, 09 Mar 2006 22:39:37 +0200 Subject: Multi-core Erlang In-Reply-To: References: Message-ID: Hello all, I did too some testing with otp_src_P11B_2006-02-26. The platform was Power Mac with 2xG5 running Mac OS X Tiger. Unfortunately the Mnesia bench (in lib/mnesia/examples/bench/ -directory) didn't show that good scaling with MT-EVM. I started single node with +S 2 -switch and run load generators and server on this node. That gave something between ~2800-2700 transactions/s. When running 2 single threaded nodes so that one runs load generators and the other is server node I got ~2000 tps. When both generators and server ran on single threaded node the result was ~3600 tps. That was somekind of a surprise. 
I did some heavy modification on replica and fragment parameters (I think I tried to use 1 replica & 1 fragment), but I lost access to the Mac so I cannot repeat them here. I was hoping that somebody more knowledgeable with Mnesia (and has access to mt-able platform) could try to find out, if one could get better results from MT-EVM. Running two naive fibonacci generators simultaneusly on node started with +S 2 the scalability was near perfect, of course. There were something interesting when quitting the EVT: "jabba@REDACTED:~/programming/erlang/koulutus/erlang> ~/tmp/otp_src_P11B_2006-02-26/bin/erl Erlang (BEAM) emulator version 5.5 [source] [smp:1] [async-threads:0] [hipe] Eshell V5.5 (abort with ^G) 1> c(oma). {ok,oma} 2> oma:fibonacci(40). 102334155 3> q(). ok 4> (no error logger present) error: "#Port<0.1>: io_thr_waker: Input driver gone away without deselecting!\n" jabba@REDACTED:~/programming/erlang/koulutus/erlang>" The fibonacci is the naive implementation if anybody wants to recreate above: fibonacci(1) -> 1; fibonacci(2) -> 1; fibonacci(N) -> fibonacci(N - 1) + fibonacci(N - 2). Cheers, Jani L. Joe Armstrong (AL/EAB) kirjoitti Thu, 09 Mar 2006 16:22:33 +0200: > Hello list, > > Following Rickards post I have now got my hands on a dual core dual > processor > system - (ie 4 CPUs) and we have been able to reproduce Richards > results. > > I have posted a longer article (with images) on my newly started > blog > > http://www.erlang-stuff.net/blog/ > > This shows a 3.6 factor speedup for a message passing benchmark and > 1.8 for an application program (a SIP stack) - these are good results. > The > second program in particular was not written to avoid sequential > bottlenecks. > > Despite this it ran 1.8 times faster on a 4 CPU system than on a > one CPU system. > > The nice thing about these results were that the benchmark ran > almost 4 times faster > - this benchmark just did spawns message passing and computations and > had no sequential > bottlenecks - pure code made from lots of small processes seems to speed > up nicely on > a multi-core system. > > Well done Rickard > > So how do you make stuff that goes fast? - go parallel > > Cheers > > > /Joe > > > > >> -----Original Message----- >> From: owner-erlang-questions@REDACTED >> [mailto:owner-erlang-questions@REDACTED] On Behalf Of Rickard Green >> Sent: den 7 mars 2006 17:52 >> To: erlang-questions@REDACTED >> Subject: [Fwd: Message passing benchmark on smp emulator] >> >> Trying again... >> >> -------- Original Message -------- >> Subject: Message passing benchmark on smp emulator >> Date: Tue, 07 Mar 2006 17:30:40 +0100 >> From: Rickard Green >> Newsgroups: erix.mailing-list.erlang-questions >> >> The message passing benchmark used in estone (and bstone) >> isn't very well suited for the smp emulator since it sends a >> message in a ring (more or less only 1 process runnable all the time). >> >> In order to be able to take advantage of an smp emulator I >> wrote another message passing benchmark. In this benchmark >> all participating processes sends a message to all processes >> and waits for replies on the sent messages. >> >> I've attached the benchmark. Run like this: >> big:bang(NoOfParticipatingProcesses). >> >> I ran the benchmark on a machine with two hyperthreaded Xeon >> 2.40GHz processors. >> >> big:bang(50): >> * r10b completed after about 0.014 seconds. >> * p11b with 4 schedulers completed after about 0.018 seconds. >> >> big:bang(100): >> * r10b completed after about 0.088 seconds. 
>> * p11b with 4 schedulers completed after about 0.088 seconds. >> >> big:bang(300): >> * r10b completed after about 2.6 seconds. >> * p11b with 4 schedulers completed after about 1.0 seconds. >> >> big:bang(500): >> * r10b completed after about 10.7 seconds. >> * p11b with 4 schedulers completed after about 3.5 seconds. >> >> big:bang(600): >> * r10b completed after about 18.0 seconds. >> * p11b with 4 schedulers completed after about 5.8 seconds. >> >> big:bang(700): >> * r10b completed after about 27.0 seconds. >> * p11b with 4 schedulers completed after about 9.3 seconds. >> >> Quite a good result I guess. >> >> Note that this is a special case and these kind of speedups >> are not expected for an arbitrary Erlang program. >> >> If you want to try yourself download a P11B snapshot at: >> http://www.erlang.org/download/snapshots/ >> remember to enable smp support: >> ./configure --enable-smp-support --disable-lock-checking >> >> You can change the number of schedulers used by passing the >> +S command line argument to erl or by calling: >> erlang:system_flag(schedulers, NoOfSchedulers) -> >> {ok|PosixError, CurrentNo, OldNo} >> >> /Rickard Green, Erlang/OTP >> >> >> -- Jani Launonen From robert.virding@REDACTED Thu Mar 9 22:39:32 2006 From: robert.virding@REDACTED (Robert Virding) Date: Thu, 09 Mar 2006 22:39:32 +0100 Subject: optimization of list comprehensions In-Reply-To: <44103526.6050603@ericsson.com> References: <44103526.6050603@ericsson.com> Message-ID: <4410A094.2030907@telia.com> You will probably find an even grater usage if you start searching more seriously. I, for one, usually do an import on 'lists' so I don't have to write the lists: prefix all the time. Robert Mats Cronqvist skrev: > Richard A. O'Keefe wrote: > > > If you have evidence to back this up, it's publishable. > > Please publish it. Some sort of survey analysing what kind of > > programmers use what kind of language features would be most > illuminating. > > Ulf Wiger (AL/EAB) wrote: > >> I think it's safe to say that even "average" industrial programmers >> rather quickly learn to exploit the virtues >> of higher-order functions and iterators. > > not to be outdone by wiger (and since o'keefe said please), i ran a > few greps on 1,748,162 lines of erlang sources (comments included). i > estimate that some 3-400 people have contributed to the code sample. > i counted how many times each of the patterns below appeared per module. > > "[ (]fun[ (]" "lists:fold" "lists:map" "lists:foreach" > never 64% 89% 84% 85% > <3 times 81% 97% 94% 94% > > 36% of the modules defined at least one fun. however, 15% of all > modules had at least one call to mnesia:transaction. so i feel pretty > confident stating that ~80% of all modules have no (non-mnesia related) > funs. > another observation is that although there were ~2 fun definition per > module, 4% of the modules accounts for 50% of the fun definitions. > one possible interpretation is that most of this code was produced by > programmers that rarely use funs if they don't have to. obviously, the > methodology is much too weak to *prove* this. i'm not sure it can be > proven, since there's no way to establish who wrote what. most modules > have been touched by at least a handful of people, and everyone has > worked on more than one module. > note also that most of this code sample has been used for years in a > very challenging environment, and is part of one of the most stable > products in its market. 
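On the remark above about importing lists: with an import attribute the calls drop the lists: prefix, which is also why they would not be caught by a grep for "lists:fold". A one-line sketch with an arbitrary subset of functions (total/1 is just an invented example):

    -import(lists, [foldl/3, map/2, foreach/2]).

    total(L) -> foldl(fun(X, Acc) -> X + Acc end, 0, L).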
> i does seem safe to say that one can write well working Erlang code > while rarely, if ever, using funs (excluding mnesia use, as always). > > mats > From ryanobjc@REDACTED Thu Mar 9 23:56:01 2006 From: ryanobjc@REDACTED (Ryan Rawson) Date: Thu, 9 Mar 2006 14:56:01 -0800 Subject: updates to new rdbms In-Reply-To: <20060309032947.40535400084@mail.omnibis.com> References: <20060309032947.40535400084@mail.omnibis.com> Message-ID: <78568af10603091456t7b55c570u3d21cf4713787e2e@mail.gmail.com> Gmail is the best UI for reading and keeping up to date with a mailing list ever. It's perfect, each thread is one item in the list of "emails" and it remembers which ones you've already read, and you can go back to the beginning of the thread without searching - it's all right there. This particular UI enhancement is _the_ killer app in gmail for me. I also vote for keeping things on list. Filtering can filter out threads you dont participate in and leave those you have in your inbox. -ryan On 3/8/06, Eranga Udesh wrote: > No way. Erlang mailing list is a huge knowledge base. Though even myself > doesn't do through all the mails published here, when I face some problems > or when I need to find info, I also do a query on back published > discussions. It's very important to have them at a centralized archive > (mailing list), so that it's easy to search. > > Anyway as klacke mentioned, may be its good to have thread hiding feature in > MUAs, so that anybody can hide/delete unnecessary thread for individual > preferences. May be it's time to put a Feature request to Thunderbird, M$, > etc. > > Cheers, > - Eranga > > > > -----Original Message----- > From: owner-erlang-questions@REDACTED > [mailto:owner-erlang-questions@REDACTED] On Behalf Of Ulf Wiger (AL/EAB) > Sent: Tuesday, March 07, 2006 8:03 PM > To: Claes Wikstrom > Cc: erlang-questions@REDACTED > Subject: RE: updates to new rdbms > > > If that's a widely held opinion, I > can of course take the discussion elsewhere. > (Or perhaps you didn't mean just this thread, > since there have certainly been longer threads > that may not have had universal appeal?) > > Except there isn't any obvious "elsewhere", > since the trapexit forums are down. > > Creating different mailing lists on the jungerl > sourceforge project would be one option. > > I welcome any suggestions/comments. > > /Ulf W > > > -----Original Message----- > > From: Claes Wikstrom [mailto:klacke@REDACTED] > > Sent: den 7 mars 2006 14:52 > > To: Ulf Wiger (AL/EAB) > > Cc: erlang-questions@REDACTED > > Subject: Re: updates to new rdbms > > > > Ulf Wiger (AL/EAB) wrote: > > > I've checked in a bug fix of rdbms_index, > > > ..... > > > > Sometimes one wish that ones MUA had a feature whereby a mail > > thread/topic could be marked as > > > > "never ever show me mail in this thread .. ever again" > > > > > > /klacke > > > > > > -- > > Claes Wikstrom -- Caps lock is nowhere and > > http://www.hyber.org -- everything is under control > > cellphone: +46 70 2097763 > > > > > From ok@REDACTED Fri Mar 10 01:29:49 2006 From: ok@REDACTED (Richard A. O'Keefe) Date: Fri, 10 Mar 2006 13:29:49 +1300 (NZDT) Subject: optimization of list comprehensions Message-ID: <200603100029.k2A0TnmL311563@atlas.otago.ac.nz> now, suppose i was a project manager, and some guy managed to write some excellent C++ code that did just what i wanted, but he didn't use classes. the code's not very snazzy, but fully readable. if i understand you correctly, you consider that guy incomptetent, and i don't. 
You have forgotten the word "appropriate". If it was *appropriate* to use classes and the guy didn't, he isn't a competent C++ programmer whatever else he is competent at. If an Erlang programmer fails to use funs *appropriately*, he is not yet a competent Erlang programmer. it is perfectly possible to be highly competent in the domain, domain competence is not language competence and write perfectly fine Erlang, *AND* never use funs (except when using mnesia of course). If funs are not used when they are *appropriate*, then it isn't "perfectly fine" Erlang. That's not to deny,m nor have I ever denied, that there are designs to implement in Erlang for which funs are not appropriate. There is one compelling reason to believe that inability or reluctance to use funs in Erlang code really *is* indicative of less competence than one would wish. Good programmers are *lazy* programmers; they don't want to write more than they have to. Good programmers are *self-critical* programmers; they are aware that they make mistakes and consciously adopt strategies to reduce the risk of mistake. For functional languages, higher-order functions are both a way of reducing the amount of code one writes AND a way of reducing the risk of errors in data-structure-walking code. So lazy self-critical programmers quickly catch on to the idea of using higher-order functions in Erlang just like they quickly catch on to the idea of using iterators in C++STL or Java. > That's nice for you. How about trying to convince others? yeah, that's a good idea! but maybe trying to do so will cause me to be flamed by some wierdo, waste a ton of my time, and bore everyone on the mailing list to tears? so i'll give it a miss. Do I scent an insult here? Someone who criticises you politely is not flaming. Someone who disagrees with you is not necessarily a weirdo. If you think that providing evidence for a belief you wish others to adopt is wasting your time, that's up to you. Remember, language features don't come free. *Someone* has to design them. *Someone* has to implement them. *Someone* has to document them. *Someone* has to revised training materials. This is more effort than you might think, so it had better have a high enough payoff. If it is you doing all these things, then it's entirely up to your judgement whether to do it. If you are asking other people to do them for you, you had better give them good reasons. From ok@REDACTED Fri Mar 10 02:24:00 2006 From: ok@REDACTED (Richard A. O'Keefe) Date: Fri, 10 Mar 2006 14:24:00 +1300 (NZDT) Subject: optimization of list comprehensions Message-ID: <200603100124.k2A1O0fN323788@atlas.otago.ac.nz> Mats Cronqvist wrote: not to be outdone by wiger (and since o'keefe said please), i ran a few greps on 1,748,162 lines of erlang sources (comments included). i estimate that some 3-400 people have contributed to the code sample. i counted how many times each of the patterns below appeared per module. "[ (]fun[ (]" "lists:fold" "lists:map" "lists:foreach" never 64% 89% 84% 85% <3 times 81% 97% 94% 94% The pattern illustrated for "fun", if it is to be taken seriously as a grep-style pattern, is guaranteed to miss many of the funs in the OTP sources. I have a program called 'm2h' (for Many To Html -- and other output formats) which can be told to extract selected tokens. So find . -name '*.[ehy]rl' -exec m2h -ik "{}" ";" | \ grep fun | wc tells me how many lines contain the keyword 'fun' in the OTP sources. 
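(For anyone without m2h, a rough equivalent in Erlang itself is to tokenise
each file with erl_scan and count the 'fun' tokens; the module name below is
made up, and it assumes the files tokenise cleanly:)

-module(count_funs).
-export([file/1]).

%% Count occurrences of the reserved word 'fun' in one source file by
%% tokenising it rather than grepping for a textual pattern.
file(Name) ->
    {ok, Bin} = file:read_file(Name),
    {ok, Tokens, _End} = erl_scan:string(binary_to_list(Bin)),
    length([T || T <- Tokens, element(1, T) =:= 'fun']).

Counting tokens this way misses nothing and matches nothing spurious,
whatever the layout of the source.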
The answer is 3300 funs in 412598 SLOC or about one fun every 125 lines; 3300 funs in 1359 modules or about 2.4 funs per module. There are more *.[ehy]rl files (1675) than modules; unsurprisingly .hrl files tend not to contain funs. > summary(f) Min. 1st Qu. Median Mean 3rd Qu. Max. 0.000 0.000 0.000 1.971 1.000 105.000 > table(f) 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 1233 112 65 39 41 31 13 15 12 6 14 6 13 10 6 6 16 17 18 19 20 21 22 23 24 26 27 29 30 32 34 36 10 5 4 2 1 1 1 3 2 1 2 2 1 1 3 3 38 39 46 47 56 62 70 78 92 105 1 1 2 1 1 1 1 1 1 1 So about 74% of the OTP R9 source files did not contain even one 'fun'. (For .hrl and .yrl files this is completely unsurprising.) But files that _do_ contain 'funs' tend to contain a lot of them: > summary(f[f>0]) Min. 1st Qu. Median Mean 3rd Qu. Max. 1.000 1.000 4.000 7.468 9.000 105.000 36% of the modules defined at least one fun. however, 15% of all modules had at least one call to mnesia:transaction. so i feel pretty confident stating that ~80% of all modules have no (non-mnesia related) funs. This is not that far from the 75% of OTP source files that have no 'fun's. another observation is that although there were ~2 fun definition per module, 4% of the modules accounts for 50% of the fun definitions. In the OTP R9 sources, files with 15 or more funs accounted for just under 50% of the fun definitions. That's just 53 files, or about 4% of modules. So again, a similar figure. For what it's worth, I find only 906 occurrences of "||" in the OTP R9 sources, so funs are nearly four times as common as list comprehensions. one possible interpretation is that most of this code was produced by programmers that rarely use funs if they don't have to. obviously, the methodology is much too weak to *prove* this. i'm not sure it can be proven, since there's no way to establish who wrote what. most modules have been touched by at least a handful of people, and everyone has worked on more than one module. When it comes to version control systems, I'm a troglodyte. I still use SCCS whenever I can. One of the things SCCS has been able to do since I first used it about 25 years ago is tell you who wrote each line. Surely any decent version control system can do this? note also that most of this code sample has been used for years in a very challenging environment, And this is surely the explanation. Old code does not contain funs because old Erlang didn't *have* funs. They are not described in the classic Erlang book. i does seem safe to say that one can write well working Erlang code while rarely, if ever, using funs (excluding mnesia use, as always). The common characteristics of these two code bodies probably reflect similar histories: old code written before various language features were added, and new features used only in new or changed code. The language has changed. What *was* 'perfectly fine' Erlang isn't any more. It's perfectly fine in that it still works, but new code should not be written that way. For an analogy, consider Fortran. I consider myself an expert Fortran 77 programmer. I have written, and can still write, 'perfectly fine' Fortran 77 code. I wrote some only two months ago. However, that code is *not* 'perfectly fine' Fortran 90, let alone Fortran 95. It uses COMMON blocks instead of MODULEs, the occasional statement function instead of nested procedures, has more GOTOs than modern Fortran needs, and makes no use at all of array expressions, even when they are appropriate (which they often are). 
To decide whether programmers are using the new language features effectively, we have to look at code written *after* the new language features were described in training material and *after* programmers could trust that their code would not need to run under old releases that didn't support these features. In fact, given that we are talking about source code that has been around for a while and not fixed when it wasn't broken, I draw the opposite conclusion to Mats from his own figures. Hypothesis: The proportion of 'funs' (and list comprehensions) in files is increasing with time, so that 'industrial programmers' *are* taking up the new features, they just aren't rewriting old code for the fun of it. From casper2000a@REDACTED Fri Mar 10 06:00:08 2006 From: casper2000a@REDACTED (Eranga Udesh) Date: Fri, 10 Mar 2006 11:00:08 +0600 Subject: Multi-core Erlang In-Reply-To: Message-ID: <20060310050143.C2256400043@mail.omnibis.com> This is excellent info and very good results. Now I have a hope for my Performance problem. But still I have a problem in Linux due to very high iowait issue (iowait goes over 80-90%, while user CPU usage is below 10-15%). I guess it's due to the broken Linux kernel ver 2.6.x or poor IO performance in HP ML/DL servers which we use. Thanks, - Eranga -----Original Message----- From: owner-erlang-questions@REDACTED [mailto:owner-erlang-questions@REDACTED] On Behalf Of Joe Armstrong (AL/EAB) Sent: Thursday, March 09, 2006 8:23 PM To: Rickard Green S (AS/EAB); erlang-questions@REDACTED Subject: Multi-core Erlang Hello list, Following Rickards post I have now got my hands on a dual core dual processor system - (ie 4 CPUs) and we have been able to reproduce Richards results. I have posted a longer article (with images) on my newly started blog http://www.erlang-stuff.net/blog/ This shows a 3.6 factor speedup for a message passing benchmark and 1.8 for an application program (a SIP stack) - these are good results. The second program in particular was not written to avoid sequential bottlenecks. Despite this it ran 1.8 times faster on a 4 CPU system than on a one CPU system. The nice thing about these results were that the benchmark ran almost 4 times faster - this benchmark just did spawns message passing and computations and had no sequential bottlenecks - pure code made from lots of small processes seems to speed up nicely on a multi-core system. Well done Rickard So how do you make stuff that goes fast? - go parallel Cheers /Joe > -----Original Message----- > From: owner-erlang-questions@REDACTED > [mailto:owner-erlang-questions@REDACTED] On Behalf Of Rickard Green > Sent: den 7 mars 2006 17:52 > To: erlang-questions@REDACTED > Subject: [Fwd: Message passing benchmark on smp emulator] > > Trying again... > > -------- Original Message -------- > Subject: Message passing benchmark on smp emulator > Date: Tue, 07 Mar 2006 17:30:40 +0100 > From: Rickard Green > Newsgroups: erix.mailing-list.erlang-questions > > The message passing benchmark used in estone (and bstone) > isn't very well suited for the smp emulator since it sends a > message in a ring (more or less only 1 process runnable all the time). > > In order to be able to take advantage of an smp emulator I > wrote another message passing benchmark. In this benchmark > all participating processes sends a message to all processes > and waits for replies on the sent messages. > > I've attached the benchmark. Run like this: > big:bang(NoOfParticipatingProcesses). 
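(The attached module is not reproduced here; a guess at its general shape,
with made-up names, is the following - every process sends one message to
every other process and waits for one reply per message sent:)

-module(bang_sketch).
-export([bang/1]).

%% N processes, all-to-all ping/pong; returns elapsed wall-clock seconds.
%% A sketch only, not the benchmark that was attached.
bang(N) ->
    Parent = self(),
    Pids = [spawn(fun() -> peer(Parent) end) || _ <- lists:seq(1, N)],
    T0 = erlang:now(),
    lists:foreach(fun(P) -> P ! {peers, Pids} end, Pids),
    wait_done(Pids),
    timer:now_diff(erlang:now(), T0) / 1000000.

peer(Parent) ->
    receive {peers, Pids} -> ok end,
    Others = Pids -- [self()],
    lists:foreach(fun(P) -> P ! {ping, self()} end, Others),
    peer_loop(Parent, length(Others), length(Others)).

%% Done when every ping has been answered and every pong received.
peer_loop(Parent, 0, 0) ->
    Parent ! {done, self()};
peer_loop(Parent, Pings, Pongs) ->
    receive
        {ping, From} -> From ! {pong, self()},
                        peer_loop(Parent, Pings - 1, Pongs);
        {pong, _}    -> peer_loop(Parent, Pings, Pongs - 1)
    end.

wait_done([])       -> ok;
wait_done([P | Ps]) -> receive {done, P} -> wait_done(Ps) end.

Running something of this shape with an increasing number of processes, on
one scheduler and on several, gives numbers of the kind quoted below.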
> > I ran the benchmark on a machine with two hyperthreaded Xeon > 2.40GHz processors. > > big:bang(50): > * r10b completed after about 0.014 seconds. > * p11b with 4 schedulers completed after about 0.018 seconds. > > big:bang(100): > * r10b completed after about 0.088 seconds. > * p11b with 4 schedulers completed after about 0.088 seconds. > > big:bang(300): > * r10b completed after about 2.6 seconds. > * p11b with 4 schedulers completed after about 1.0 seconds. > > big:bang(500): > * r10b completed after about 10.7 seconds. > * p11b with 4 schedulers completed after about 3.5 seconds. > > big:bang(600): > * r10b completed after about 18.0 seconds. > * p11b with 4 schedulers completed after about 5.8 seconds. > > big:bang(700): > * r10b completed after about 27.0 seconds. > * p11b with 4 schedulers completed after about 9.3 seconds. > > Quite a good result I guess. > > Note that this is a special case and these kind of speedups > are not expected for an arbitrary Erlang program. > > If you want to try yourself download a P11B snapshot at: > http://www.erlang.org/download/snapshots/ > remember to enable smp support: > ./configure --enable-smp-support --disable-lock-checking > > You can change the number of schedulers used by passing the > +S command line argument to erl or by calling: > erlang:system_flag(schedulers, NoOfSchedulers) -> > {ok|PosixError, CurrentNo, OldNo} > > /Rickard Green, Erlang/OTP > > > From ke.han@REDACTED Fri Mar 10 06:19:28 2006 From: ke.han@REDACTED (ke han) Date: Fri, 10 Mar 2006 13:19:28 +0800 Subject: updates to new rdbms In-Reply-To: <78568af10603091456t7b55c570u3d21cf4713787e2e@mail.gmail.com> References: <20060309032947.40535400084@mail.omnibis.com> <78568af10603091456t7b55c570u3d21cf4713787e2e@mail.gmail.com> Message-ID: On Mar 10, 2006, at 6:56 AM, Ryan Rawson wrote: This is off topic for this thread. But I'm going to have my say in support of keeping things on maillist and not fragment the community. Most all modern MUAs can organize by thread (collapse what you don't want to see) AND they also have filtering rules so you can take all incoming for a maillist and auto move it to a folder specific to that maillist....your inbox stays clean. There is no problem if you spend 5 minutes configuring your local mail folders. Yes, there are alternative solutions. But none that rival the ubiquity of a maillist subscription. The only thing missing with maillist is the ability to subscribe by telling that you want post privileges but not receive mails. This allows you to use gmane or others to read and still be able to post. thanks, ke han > Gmail is the best UI for reading and keeping up to date with a mailing > list ever. It's perfect, each thread is one item in the list of > "emails" and it remembers which ones you've already read, and you can > go back to the beginning of the thread without searching - it's all > right there. > > This particular UI enhancement is _the_ killer app in gmail for me. > > I also vote for keeping things on list. Filtering can filter out > threads you dont participate in and leave those you have in your > inbox. > > -ryan > > On 3/8/06, Eranga Udesh wrote: >> No way. Erlang mailing list is a huge knowledge base. Though even >> myself >> doesn't do through all the mails published here, when I face some >> problems >> or when I need to find info, I also do a query on back published >> discussions. It's very important to have them at a centralized >> archive >> (mailing list), so that it's easy to search. 
>> >> Anyway as klacke mentioned, may be its good to have thread hiding >> feature in >> MUAs, so that anybody can hide/delete unnecessary thread for >> individual >> preferences. May be it's time to put a Feature request to >> Thunderbird, M$, >> etc. >> >> Cheers, >> - Eranga >> >> >> >> -----Original Message----- >> From: owner-erlang-questions@REDACTED >> [mailto:owner-erlang-questions@REDACTED] On Behalf Of Ulf Wiger >> (AL/EAB) >> Sent: Tuesday, March 07, 2006 8:03 PM >> To: Claes Wikstrom >> Cc: erlang-questions@REDACTED >> Subject: RE: updates to new rdbms >> >> >> If that's a widely held opinion, I >> can of course take the discussion elsewhere. >> (Or perhaps you didn't mean just this thread, >> since there have certainly been longer threads >> that may not have had universal appeal?) >> >> Except there isn't any obvious "elsewhere", >> since the trapexit forums are down. >> >> Creating different mailing lists on the jungerl >> sourceforge project would be one option. >> >> I welcome any suggestions/comments. >> >> /Ulf W >> >>> -----Original Message----- >>> From: Claes Wikstrom [mailto:klacke@REDACTED] >>> Sent: den 7 mars 2006 14:52 >>> To: Ulf Wiger (AL/EAB) >>> Cc: erlang-questions@REDACTED >>> Subject: Re: updates to new rdbms >>> >>> Ulf Wiger (AL/EAB) wrote: >>>> I've checked in a bug fix of rdbms_index, >>>> ..... >>> >>> Sometimes one wish that ones MUA had a feature whereby a mail >>> thread/topic could be marked as >>> >>> "never ever show me mail in this thread .. ever again" >>> >>> >>> /klacke >>> >>> >>> -- >>> Claes Wikstrom -- Caps lock is nowhere and >>> http://www.hyber.org -- everything is under control >>> cellphone: +46 70 2097763 >>> >> >> >> From mats.cronqvist@REDACTED Fri Mar 10 08:53:13 2006 From: mats.cronqvist@REDACTED (Mats Cronqvist) Date: Fri, 10 Mar 2006 08:53:13 +0100 Subject: optimization of list comprehensions In-Reply-To: <4410A094.2030907@telia.com> References: <44103526.6050603@ericsson.com> <4410A094.2030907@telia.com> Message-ID: <44113069.7000800@ericsson.com> Robert Virding wrote: > You will probably find an even grater usage if you start searching more > seriously. yes, it's by no means a serious study. > I, for one, usually do an import on 'lists' so I don't have > to write the lists: prefix all the time. so do i. but i doubt accounting for -import(lists) would change the numbers significantly. mats From xlcr@REDACTED Fri Mar 10 09:05:37 2006 From: xlcr@REDACTED (Nick Linker) Date: Fri, 10 Mar 2006 14:05:37 +0600 Subject: Erlang/OTP R10B-10 has been released In-Reply-To: References: Message-ID: <44113351.5040207@mail.ru> Did you consider implementing "abstract patterns" (proposed by Richard O'Keefe)? Or maybe you plan to implement them in the next release ;-) Best regards, Linker Nick. From mats.cronqvist@REDACTED Fri Mar 10 10:13:37 2006 From: mats.cronqvist@REDACTED (Mats Cronqvist) Date: Fri, 10 Mar 2006 10:13:37 +0100 Subject: optimization of list comprehensions In-Reply-To: <200603100124.k2A1O0fN323788@atlas.otago.ac.nz> References: <200603100124.k2A1O0fN323788@atlas.otago.ac.nz> Message-ID: <44114341.5050307@ericsson.com> Richard A. O'Keefe wrote: > [...] > The pattern illustrated for "fun", if it is to be taken seriously as a > grep-style pattern, is guaranteed to miss many of the funs in the OTP > sources. "many"? enough to affect the result significantly? example please. > [...] > > When it comes to version control systems, I'm a troglodyte. I still use > SCCS whenever I can. 
One of the things SCCS has been able to do since I > first used it about 25 years ago is tell you who wrote each line. Surely > any decent version control system can do this? we're using ClearCase. as far as i know (which admittedly isn't very far) there's no easy way to do that. and even if you could, it would only tell you the username of the person who checked in the file (which is often, but not always, the same as the person who *wrote* the code). > note also that most of this code sample has been used for > years in a very challenging environment, > > And this is surely the explanation. Old code does not contain funs > because old Erlang didn't *have* funs. They are not described in the > classic Erlang book. we have been using mnesia for as long as i can remember. when were real funs (and tuple-funs) introduced? > [...] > The language has changed. What *was* 'perfectly fine' Erlang isn't any > more. It's perfectly fine in that it still works, but new code should > not be written that way. well, i guess i'm a bit more flexible than you in that regard. i think it's ok for people to write C for C++ compilers, FORTRAN77 for Fortran90 compilers, etc, as long as the code works and is readable. of course it would be much *better* if they didn't. > [...] > To decide whether programmers are using the new language features > effectively, we have to look at code written *after* the new language > features were described in training material and *after* programmers > could trust that their code would not need to run under old releases > that didn't support these features. i believe *most* of the code was written well after real funs were introduced. but yes, it would be intersting to have a sample that was started from scratch more recently than the AXD code. there is a huge inertia in adopting new features, because old guys tend to stick to what they know, and new guys learn from the old code. > In fact, given that we are talking about source code that has been > around for a while and not fixed when it wasn't broken, I draw the > opposite conclusion to Mats from his own figures. this was my conclusion; "one can write well working Erlang code while rarely, if ever, using funs". quite a feat to draw the opposite conclusion from my figures. i also stated a hypothesis; "most of this code was produced by programmers that rarely use funs if they don't have to". i'm not even sure what the opposite of that is, but i doubt the figures support it. > Hypothesis: > The proportion of 'funs' (and list comprehensions) in files > is increasing with time, so that 'industrial programmers' > *are* taking up the new features, they just aren't rewriting > old code for the fun of it. this is probably true. of course, it's a non sequitur (or perhaps even a straw man argument?). noone has claimed the opposite, and no figures that i'm aware of says anything about trends over time. mats From ulf.wiger@REDACTED Fri Mar 10 10:20:17 2006 From: ulf.wiger@REDACTED (Ulf Wiger (AL/EAB)) Date: Fri, 10 Mar 2006 10:20:17 +0100 Subject: optimization of list comprehensions Message-ID: Here's another way to find the info: 1> Loaded = erlang:loaded(). 2> Imps = lists:map(fun(M) -> case beam_lib:chunks(code:which(M),[imports]) of {ok, {_,[{imports,Is}]}} -> {M,Is}; _ -> {M,undefined} end end, Loaded). 3> GoodImps = [X || {_,I} = X <- Imps, I =/= undefined]. 4> {length(GoodImps), NLoaded = length(Loaded)}. 5> UsersOf = fun(MFA) -> L = length([M || {M,I} <- GoodImps, lists:member(MFA,I)]), {L, round(100*L/NLoaded)} end. 
6> [{MFA,UsersOf(MFA)} || MFA <- [{lists,foldl,3},{lists,map,2},{lists,foreach,2}]]. In my test system (an erlang node in a product we're currently working on): - Loaded modules: 953 (947 with 'imports' info) - modules using + lists:foldl/3: 145 (15%) + lists:map/2: 168 (18%) + lists:foreach/2: 156 (16%) % more esoteric lists functions + lists:merge/3: 0 (0%) + lists:sort/2: 7 (1%) + lists:umerge/3: 0 (0%) + lists:usort/2: 1 (0%) + lists:zipwith/3: 2 (0%) + lists:zipwith3/4: 0 (0%) % other "advanced functions" + ets:select/2: 20 (2%) + ets:select/1: 5 (1%) + ets:select_delete/1: 3 (0%) Formatting got tiresome. I'll just post the raw data: [{{mnesia,dirty_read,1},{58,6}}, {{mnesia,dirty_read,2},{34,4}}, {{mnesia,dirty_write,1},{73,8}}, {{mnesia,dirty_write,2},{3,0}}] [{{mnesia,activity,2},{8,1}}, {{mnesia,activity,3},{1,0}}, {{mnesia,activity,4},{0,0}}, {{mnesia,transaction,1},{33,3}}, {{mnesia,transaction,2},{2,0}}] Moving on to funs: 7> Locals = lists:map(fun(M) -> case beam_lib:chunks(code:which(M),[locals]) of {ok, {_,[{locals,Ls}]}} -> {M,Ls}; _ -> {M,undefined} end end, Loaded). 8> GoodLocals = [X || {_,I} = X <- Locals, I =/= undefined]. 9> IsAFun = fun({F,_A}) -> hd(atom_to_list(F)) == $- end. 10> HasFuns = [{M,Ls} || {M,Ls} <- GoodLocals, lists:any(IsAFun, Ls)]. 11> length(HasFuns). 499 12> length(GoodLocals). 947 13> IsAnLc = fun({F,_A}) -> regexp:match(atom_to_list(F),"lc\\$") =/= nomatch end. 14> HasLcs = [{M,Ls} || {M,Ls} <- GoodLocals, lists:any(IsAnLc, Ls)]. 15> length(HasLcs). 251 So, 499 modules out of 947 seem to have funs, and 251 of those modules seem to have LCs. Note that among the loaded modules are both OTP modules (from those applications that we use), and our modules. I've not bothered to separate them. /Ulf W > -----Original Message----- > From: owner-erlang-questions@REDACTED > [mailto:owner-erlang-questions@REDACTED] On Behalf Of Mats Cronqvist > Sent: den 10 mars 2006 08:53 > To: erlang-questions@REDACTED > Subject: Re: optimization of list comprehensions > > > > Robert Virding wrote: > > You will probably find an even grater usage if you start searching > > more seriously. > > yes, it's by no means a serious study. > > > I, for one, usually do an import on 'lists' so I don't have > to write > > the lists: prefix all the time. > > so do i. but i doubt accounting for -import(lists) would > change the numbers significantly. > > mats > From mats.cronqvist@REDACTED Fri Mar 10 10:46:34 2006 From: mats.cronqvist@REDACTED (Mats Cronqvist) Date: Fri, 10 Mar 2006 10:46:34 +0100 Subject: optimization of list comprehensions In-Reply-To: <200603100029.k2A0TnmL311563@atlas.otago.ac.nz> References: <200603100029.k2A0TnmL311563@atlas.otago.ac.nz> Message-ID: <44114AFA.4030809@ericsson.com> Richard A. O'Keefe wrote: > [...] > If funs are not used when they are *appropriate*, then it isn't > "perfectly fine" Erlang. appropriateness is in the eye of the beholder. > [...] > Remember, language features don't come free. > *Someone* has to design them. > *Someone* has to implement them. > *Someone* has to document them. > *Someone* has to revised training materials. > This is more effort than you might think, so it had better have a > high enough payoff. If it is you doing all these things, then it's > entirely up to your judgement whether to do it. If you are asking > other people to do them for you, you had better give them good reasons. i'm well aware that new features don't come free. as a matter of fact, i could find out exactly how much it would cost. 
if i was certain that new notation would improve our product, i would be having this discussion directly with OTP. as it is, i'm a bit discouraged after noticing how little use of list comprehensions there actually is. mats From thomasl_erlang@REDACTED Fri Mar 10 11:57:08 2006 From: thomasl_erlang@REDACTED (Thomas Lindgren) Date: Fri, 10 Mar 2006 02:57:08 -0800 (PST) Subject: Multi-core Erlang In-Reply-To: Message-ID: <20060310105708.39021.qmail@web34401.mail.mud.yahoo.com> --- "Joe Armstrong (AL/EAB)" wrote: > This shows a 3.6 factor speedup for a message > passing benchmark and > 1.8 for an application program (a SIP stack) - these > are good results. > The > second program in particular was not written to > avoid sequential > bottlenecks. > > Despite this it ran 1.8 times faster on a 4 CPU > system than on a > one CPU system. Was this speedup attained compared to a 1-CPU multithreaded system or compared to a standard sequential system? (I can't tell from your blog entry either.) Best, Thomas __________________________________________________ Do You Yahoo!? Tired of spam? Yahoo! Mail has the best spam protection around http://mail.yahoo.com From bengt.kleberg@REDACTED Fri Mar 10 13:17:15 2006 From: bengt.kleberg@REDACTED (Bengt Kleberg) Date: Fri, 10 Mar 2006 13:17:15 +0100 Subject: optimization of list comprehensions In-Reply-To: <44114AFA.4030809@ericsson.com> References: <200603100029.k2A0TnmL311563@atlas.otago.ac.nz> <44114AFA.4030809@ericsson.com> Message-ID: <44116E4B.6010605@ericsson.com> On 2006-03-10 10:46, Mats Cronqvist wrote: > > > Richard A. O'Keefe wrote: >> [...] >> If funs are not used when they are *appropriate*, then it isn't >> "perfectly fine" Erlang. > > appropriateness is in the eye of the beholder. i apologise for entering this discussion. i have not had anything to offer before, and even this is probably in the ''better not written'' cathegory. but, imho if somebody is doing a manual recusrsion over a list to ''map/filter/fold'' then it isn't "perfectly fine" Erlang. no matter how appropriate the author of the code think it is. bengt From mats.cronqvist@REDACTED Fri Mar 10 13:34:57 2006 From: mats.cronqvist@REDACTED (Mats Cronqvist) Date: Fri, 10 Mar 2006 13:34:57 +0100 Subject: optimization of list comprehensions In-Reply-To: <44116E4B.6010605@ericsson.com> References: <200603100029.k2A0TnmL311563@atlas.otago.ac.nz> <44114AFA.4030809@ericsson.com> <44116E4B.6010605@ericsson.com> Message-ID: <44117271.7060605@ericsson.com> Bengt Kleberg wrote: > On 2006-03-10 10:46, Mats Cronqvist wrote: > >> >> >> Richard A. O'Keefe wrote: >> >>> [...] If funs are not used when they are *appropriate*, then it isn't >>> "perfectly fine" Erlang. >> >> >> appropriateness is in the eye of the beholder. > > > i apologise for entering this discussion. i have not had anything to > offer before, and even this is probably in the ''better not written'' > cathegory. > > but, imho if somebody is doing a manual recusrsion over a list to > ''map/filter/fold'' then it isn't "perfectly fine" Erlang. > no matter how appropriate the author of the code think it is. but that depends entirely of what you mean by "perfectly fine". it's not like recursive functions are being obsoleted. it might even be faster to execute. in this particular case, your opinion and my opinion are the same. but they are just opinions, not laws of nature. 
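for concreteness, here's the same toy function in the three styles being
debated (a made-up example, not from any real code base):

%% explicit recursion
sum_squares([H | T]) -> H * H + sum_squares(T);
sum_squares([])      -> 0.

%% a fun plus lists:foldl/3
sum_squares2(L) ->
    lists:foldl(fun(X, Acc) -> X * X + Acc end, 0, L).

%% a list comprehension
sum_squares3(L) ->
    lists:sum([X * X || X <- L]).

all three give the same answer (14 for [1,2,3]); which one counts as
"perfectly fine" is exactly the question.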
mats From bengt.kleberg@REDACTED Fri Mar 10 14:00:43 2006 From: bengt.kleberg@REDACTED (Bengt Kleberg) Date: Fri, 10 Mar 2006 14:00:43 +0100 Subject: optimization of list comprehensions In-Reply-To: <44117271.7060605@ericsson.com> References: <200603100029.k2A0TnmL311563@atlas.otago.ac.nz> <44114AFA.4030809@ericsson.com> <44116E4B.6010605@ericsson.com> <44117271.7060605@ericsson.com> Message-ID: <4411787B.7090306@ericsson.com> On 2006-03-10 13:34, Mats Cronqvist wrote: ...deleted > but that depends entirely of what you mean by "perfectly fine". it's > not like recursive functions are being obsoleted. it might even be > faster to execute. > in this particular case, your opinion and my opinion are the same. but > they are just opinions, not laws of nature. it is possible to introduce sufficient amounts of relativism into (every?) discusion as to render it meaningless. bengt From thomas@REDACTED Fri Mar 10 14:02:56 2006 From: thomas@REDACTED (Thomas Johnsson) Date: Fri, 10 Mar 2006 14:02:56 +0100 Subject: optimization of list comprehensions In-Reply-To: <44114AFA.4030809@ericsson.com> References: <200603100029.k2A0TnmL311563@atlas.otago.ac.nz> <44114AFA.4030809@ericsson.com> Message-ID: <44117900.9050004@skri.net> Hm yes, "small is beautiful" is true perhaps more often than one might think.... perhaps a useful middle of the road approach would be to provide efficient looping implementation of the following pattern lists:foldl(fun(..pat...)-> ... expr.. end, Z, [ list comprehension....] ) and also point that out in the documentation. As an example of "the whole shebang", ie a (purely!) functional language that has a loop construct, c.f. Id, see http://csg.csail.mit.edu/pubs/publications.html , later included in pH (parallel Haskell). Accumulating array comprehensions, anyone? -- Thomas Mats Cronqvist wrote: > > > Richard A. O'Keefe wrote: > >> [...] >> If funs are not used when they are *appropriate*, then it isn't >> "perfectly fine" Erlang. > > > appropriateness is in the eye of the beholder. > >> [...] >> Remember, language features don't come free. >> *Someone* has to design them. >> *Someone* has to implement them. >> *Someone* has to document them. >> *Someone* has to revised training materials. >> This is more effort than you might think, so it had better have a >> high enough payoff. If it is you doing all these things, then it's >> entirely up to your judgement whether to do it. If you are asking >> other people to do them for you, you had better give them good reasons. > > > i'm well aware that new features don't come free. as a matter of > fact, i could find out exactly how much it would cost. > if i was certain that new notation would improve our product, i > would be having this discussion directly with OTP. as it is, i'm a bit > discouraged after noticing how little use of list comprehensions there > actually is. > > mats > From thomas@REDACTED Fri Mar 10 14:05:36 2006 From: thomas@REDACTED (Thomas Johnsson) Date: Fri, 10 Mar 2006 14:05:36 +0100 Subject: optimization of list comprehensions In-Reply-To: <44117900.9050004@skri.net> References: <200603100029.k2A0TnmL311563@atlas.otago.ac.nz> <44114AFA.4030809@ericsson.com> <44117900.9050004@skri.net> Message-ID: <441179A0.6070308@skri.net> Sorry for the noise, the link I meant to include was, more specifically http://csg.lcs.mit.edu/pubs/memos/Memo-284/memo-284-2.pdf --Thomas Thomas Johnsson wrote: > Hm yes, "small is beautiful" is true perhaps more often than one might > think.... 
> perhaps a useful middle of the road approach would be to provide > efficient looping implementation of the following pattern > lists:foldl(fun(..pat...)-> ... expr.. end, Z, [ list > comprehension....] ) > and also point that out in the documentation. > > As an example of "the whole shebang", ie a (purely!) functional > language that has a loop construct, c.f. Id, see > http://csg.csail.mit.edu/pubs/publications.html , > later included in pH (parallel Haskell). > > Accumulating array comprehensions, anyone? > -- Thomas > > > > Mats Cronqvist wrote: > >> >> >> Richard A. O'Keefe wrote: >> >>> [...] If funs are not used when they are *appropriate*, then it isn't >>> "perfectly fine" Erlang. >> >> >> >> appropriateness is in the eye of the beholder. >> >>> [...] >>> Remember, language features don't come free. >>> *Someone* has to design them. >>> *Someone* has to implement them. >>> *Someone* has to document them. >>> *Someone* has to revised training materials. >>> This is more effort than you might think, so it had better have a >>> high enough payoff. If it is you doing all these things, then it's >>> entirely up to your judgement whether to do it. If you are asking >>> other people to do them for you, you had better give them good reasons. >> >> >> >> i'm well aware that new features don't come free. as a matter of >> fact, i could find out exactly how much it would cost. >> if i was certain that new notation would improve our product, i >> would be having this discussion directly with OTP. as it is, i'm a >> bit discouraged after noticing how little use of list comprehensions >> there actually is. >> >> mats >> > > From mats.cronqvist@REDACTED Fri Mar 10 14:41:55 2006 From: mats.cronqvist@REDACTED (Mats Cronqvist) Date: Fri, 10 Mar 2006 14:41:55 +0100 Subject: optimization of list comprehensions In-Reply-To: <4411787B.7090306@ericsson.com> References: <200603100029.k2A0TnmL311563@atlas.otago.ac.nz> <44114AFA.4030809@ericsson.com> <44116E4B.6010605@ericsson.com> <44117271.7060605@ericsson.com> <4411787B.7090306@ericsson.com> Message-ID: <44118223.60507@ericsson.com> Bengt Kleberg wrote: > On 2006-03-10 13:34, Mats Cronqvist wrote: > ...deleted > >> but that depends entirely of what you mean by "perfectly fine". it's >> not like recursive functions are being obsoleted. it might even be >> faster to execute. >> in this particular case, your opinion and my opinion are the same. >> but they are just opinions, not laws of nature. > > > it is possible to introduce sufficient amounts of relativism into > (every?) discusion as to render it meaningless. yes, that didn't come out very well did it :> alas, a certain amount of relativism is perhaps unavoidable? consider the code below. i think foo/0 is a completely unacceptable eyesore, whilst bla/1 is merely annoying. i know for a fact that some people think foo/0 is perfectly fine erlang. so how would you propose i argue they're wrong? mats -export([foo/0,bla/1]). -define(FOO,"FOO". foo()-> FOO = ?FOO "FOO". bla(X) -> bla(tl(X),{hd(X)}). bla([H|T],A) -> bla(T,{H,A}); bla([],A) -> A. From bjorn@REDACTED Fri Mar 10 14:49:44 2006 From: bjorn@REDACTED (Bjorn Gustavsson) Date: 10 Mar 2006 14:49:44 +0100 Subject: Erlang/OTP R10B-10 has been released In-Reply-To: <44113351.5040207@mail.ru> References: <44113351.5040207@mail.ru> Message-ID: Abstract patterns is tricky to implement efficiently as it would require inlining across module boundaries. 
Inlining across modules boundaries with retained semantics for code change is possible to implement, but tricky. Currently, we have no plans to implement abstract patterns. /Bjorn Nick Linker writes: > Did you consider implementing "abstract patterns" (proposed by Richard > O'Keefe)? > > Or maybe you plan to implement them in the next release ;-) > > Best regards, > Linker Nick. > -- Bj?rn Gustavsson, Erlang/OTP, Ericsson AB From Pieter.Rautenbach@REDACTED Fri Mar 10 15:17:09 2006 From: Pieter.Rautenbach@REDACTED (Pieter Rautenbach) Date: Fri, 10 Mar 2006 16:17:09 +0200 Subject: Force registration of node with epmd Message-ID: <88B5DDE8C1A06741B754B910DE2DEFBB881821@HERMES.swistgroup.com> Hello As a novice Erlang programmer I have the following question: Is it possible to register a node with epmd after restarting epmd, if (for some unknown reason) epmd failed to start (or maybe crashed)? Here's how create this scenario: 1. Create new node: erl -sname tmp (Let's assume the Erlang emulator shows tmp@REDACTED at its prompt.) 2. Check that epmd is running and contains this node: epmd -names It should return something like this: epmd: up and running on port 4369 with data: name tmp at port 2581 3. Kill epmd: epmd -kill 4. Restart epmd: epmd 5. Check if there are any registered nodes: epmd -names There should be none. 6. Now, with an unregistered node and epmd running, is there any way to register the node without restarting/recreating the node? I've looked at the net_kernel module, specifically connect_node/1, e.g.: net_kernel:connect_node('tmp@REDACTED'). This returned true, but the node doesn't show up in epmd. Also, if a node is started with -sname, net_kernel:stop/1 returns {error, not_allowed} as the node was not started with net_kernel:start/? in the first place. Any help or comments will be appreciated. Thanks Pieter Rautenbach From hans.r.nilsson@REDACTED Fri Mar 10 15:55:45 2006 From: hans.r.nilsson@REDACTED (Hans Nilsson R (AL/EAB)) Date: Fri, 10 Mar 2006 15:55:45 +0100 Subject: Multi-core Erlang Message-ID: <63E39ADA42BF8B49BEAE3666683A2484022F34EF@esealmw107.eemea.ericsson.se> It was compared to a standard sequential system on one of the CPUs on the same machine. /Hans -----Original Message----- From: owner-erlang-questions@REDACTED [mailto:owner-erlang-questions@REDACTED] On Behalf Of Thomas Lindgren Sent: den 10 mars 2006 11:57 To: erlang-questions@REDACTED Subject: Re: Multi-core Erlang --- "Joe Armstrong (AL/EAB)" wrote: > This shows a 3.6 factor speedup for a message passing benchmark > and > 1.8 for an application program (a SIP stack) - these are good results. > The > second program in particular was not written to avoid sequential > bottlenecks. > > Despite this it ran 1.8 times faster on a 4 CPU system than on a > one CPU system. Was this speedup attained compared to a 1-CPU multithreaded system or compared to a standard sequential system? (I can't tell from your blog entry either.) Best, Thomas __________________________________________________ Do You Yahoo!? Tired of spam? Yahoo! Mail has the best spam protection around http://mail.yahoo.com From ernie.makris@REDACTED Fri Mar 10 16:41:32 2006 From: ernie.makris@REDACTED (Ernie Makris) Date: Fri, 10 Mar 2006 10:41:32 -0500 Subject: Strange memory stats on AMD Opteron_x86_64 on Fedora Core4 Message-ID: <44119E2C.3090907@comcast.net> Hi Erlangers, While using Erlang (BEAM) emulator version 5.4.12 [64-bit] [source] [hipe] [threads:0] I'm seeing some really strange behavior with reporting stats. 
erlang:memory() [{total,18446548566807571279}, {processes,18446548566799419082}, {processes_used,18446548566799393690}, {system,8152197}, {atom,592425}, {atom_used,560664}, {binary,210749}, {code,5789807}, {ets,441032}] The values for the first couple of minutes or so are correct. Something like this: erlang:memory(). [{total,6266812}, {processes,795266}, {processes_used,786930}, {system,5471546}, {atom,326081}, {atom_used,307415}, {binary,812351}, {code,3007114}, {ets,222344}] Anyone have any ideas? There was a hipe fix in R10B-10. I'm upgrading now, does anyone know if this is the type of problem it solves? Thanks Ernie From kostis@REDACTED Sat Mar 11 10:33:13 2006 From: kostis@REDACTED (Kostis Sagonas) Date: Sat, 11 Mar 2006 10:33:13 +0100 (MET) Subject: Strange memory stats on AMD Opteron_x86_64 on Fedora Core4 In-Reply-To: Mail from 'Ernie Makris ' dated: Fri, 10 Mar 2006 10:41:32 -0500 Message-ID: <200603110933.k2B9XD6W021519@spikklubban.it.uu.se> Ernie Makris wrote: > While using Erlang (BEAM) emulator version 5.4.12 [64-bit] [source] > [hipe] [threads:0] > I'm seeing some really strange behavior with reporting stats. > > erlang:memory() > [{total,18446548566807571279}, > {processes,18446548566799419082}, > {processes_used,18446548566799393690}, > ... > Anyone have any ideas? There was a hipe fix in R10B-10. I'm upgrading > now, does anyone know if this is the type of problem it solves? The problem you are experiencing seems to be the side effect of confusion between 32 vs. 64 bit values somewhere. It is highly unlikely that this is HiPE-related. From my recollection, the fix that you are referring to is not 64-bit specific. We could possibly fix this, but for this we need access to an Erlang program that exhibits the problem. If you send us such a beast, we will most probably send you an appropriate patch. Kostis From johan@REDACTED Sat Mar 11 11:09:19 2006 From: johan@REDACTED (jmT2) Date: Sat, 11 Mar 2006 11:09:19 +0100 Subject: gen_tcp:recv - last segement when closed port Message-ID: I have a problem using gen_tcp:recv(Socket, 0). If (when as it turns out in my application) the server port closes the last segment is never returned. Using Ethereal I can see the last TCP packet containing 1199 byts, but the last segement returned by recv/2 is as usual 1024 bytes. The remaining 175 bytes is never found. The next call to recv/2 will only return {error, closed} :-( I havn't looked too carefully at what is happening but looking at the src from gen_tcp and inet_db it looks like a call to erlang:port_get_data/1 returns {error, closed} allthough there should be data in the buffer. Is this a bug or am I doning something wrong. Do I have to rewrite it using active mode? Johan ----------- Johan Montelius jmT2 From hedeland@REDACTED Sat Mar 11 11:47:57 2006 From: hedeland@REDACTED (Per Hedeland) Date: Sat, 11 Mar 2006 11:47:57 +0100 (CET) Subject: gen_tcp:recv - last segement when closed port In-Reply-To: Message-ID: <200603111047.k2BAlvio051411@tordmule.bluetail.com> jmT2 wrote: > >I have a problem using gen_tcp:recv(Socket, 0). If (when as it turns out >in my application) the server port closes the last segment is never >returned. Using Ethereal I can see the last TCP packet containing 1199 >byts, but the last segement returned by recv/2 is as usual 1024 bytes. The >remaining 175 bytes is never found. The next call to recv/2 will only >return {error, closed} :-( Hm, what's "usual" about 1024 bytes? Are you sure you aren't actually doing gen_tcp:recv(Socket, 1024)? 
In that case what you see is the documented behaviour. See also the thread starting at: http://www.erlang.org/ml-archive/erlang-questions/200506/msg00116.html --Per Hedeland From johan@REDACTED Sat Mar 11 12:18:32 2006 From: johan@REDACTED (jmT2) Date: Sat, 11 Mar 2006 12:18:32 +0100 Subject: gen_tcp:recv - last segement when closed port In-Reply-To: <200603111047.k2BAlvio051411@tordmule.bluetail.com> References: <200603111047.k2BAlvio051411@tordmule.bluetail.com> Message-ID: This is how I open the socket: gen_tcp:connect(IP, Port, [list, {packet,0}, {active, false}]). And this is how I access the socket: gen_tcp:recv(Socket, 0, 20000) %(high timeout since I'm running over mobile links) I'm reading from a web server that replies with HTTP/1.0 and thus closes the TCP connection once the page is delivered. The page is delivered in five TCP packets: 638 bytes header info 1400 bytes body 1400 bytes body 1270 bytes body 1199 bytes body My read-parse loop reads the stream by calling gen_tcp:recv/3 and gets the following: head--- 638 body--- 1024 body--- 376 (aah the first 1400) body--- 1024 body--- 376 (aah the second 1400) body--- 1024 body--- 246 (aah the third 1270) body--- 1024 - ok this is why I say the usual 1024 body closed - ops where did the remaining 175 bytes go? My guess is that the port is closed while I'm parsing the last 1024 bytes and that the 175 bytes are left in buffer. Is there any way to retreive this? I Klackes example he requested a specific length but I'm only asking for any length. Johan Den 2006-03-11 11:47:57 skrev Per Hedeland : > jmT2 wrote: >> >> I have a problem using gen_tcp:recv(Socket, 0). If (when as it turns out >> in my application) the server port closes the last segment is never >> returned. Using Ethereal I can see the last TCP packet containing 1199 >> byts, but the last segement returned by recv/2 is as usual 1024 bytes. >> The >> remaining 175 bytes is never found. The next call to recv/2 will only >> return {error, closed} :-( > > Hm, what's "usual" about 1024 bytes? Are you sure you aren't actually > doing gen_tcp:recv(Socket, 1024)? In that case what you see is the > documented behaviour. See also the thread starting at: > http://www.erlang.org/ml-archive/erlang-questions/200506/msg00116.html > > --Per Hedeland > -- ----------- Johan Montelius jmT2 From valentin@REDACTED Sat Mar 11 12:51:22 2006 From: valentin@REDACTED (Valentin Micic) Date: Sat, 11 Mar 2006 13:51:22 +0200 Subject: gen_tcp:recv - last segement when closed port References: <200603111047.k2BAlvio051411@tordmule.bluetail.com> Message-ID: <012c01c64502$1f04d7a0$7309fea9@MONEYMAKER2> Could it be that default receive buffer is somehow set to 1K?. If so, try to increase it using: inet:setopts( Sock, [{recbuf, Integer}] ). Valentin. From johan@REDACTED Sat Mar 11 13:00:45 2006 From: johan@REDACTED (jmT2) Date: Sat, 11 Mar 2006 13:00:45 +0100 Subject: gen_tcp:recv - last segement when closed port In-Reply-To: <012c01c64502$1f04d7a0$7309fea9@MONEYMAKER2> References: <200603111047.k2BAlvio051411@tordmule.bluetail.com> <012c01c64502$1f04d7a0$7309fea9@MONEYMAKER2> Message-ID: Den 2006-03-11 12:51:22 skrev Valentin Micic : > Could it be that default receive buffer is somehow set to 1K?. If so, > try to increase it using: > inet:setopts( Sock, [{recbuf, Integer}] ). > > Valentin. > > > Yes! I opened the socket with {recbuf, 1400} and it works. But am I only lucky? .... What will happend if the server manages to send two packets and then closes the port before I have parsed the first one? 
Thanks for the quick help! Johan -- ----------- Johan Montelius jmT2 From hedeland@REDACTED Sat Mar 11 14:19:59 2006 From: hedeland@REDACTED (Per Hedeland) Date: Sat, 11 Mar 2006 14:19:59 +0100 (CET) Subject: gen_tcp:recv - last segement when closed port In-Reply-To: Message-ID: <200603111319.k2BDJxjY052035@tordmule.bluetail.com> jmT2 wrote: > >This is how I open the socket: > > gen_tcp:connect(IP, Port, [list, {packet,0}, {active, false}]). > >And this is how I access the socket: > > gen_tcp:recv(Socket, 0, 20000) %(high timeout since I'm running over >mobile links) > >I'm reading from a web server that replies with HTTP/1.0 and thus closes >the TCP connection once the page is delivered. The page is delivered in >five TCP packets: > >638 bytes header info >1400 bytes body >1400 bytes body >1270 bytes body >1199 bytes body > >My read-parse loop reads the stream by calling gen_tcp:recv/3 and gets the >following: > >head--- 638 > >body--- 1024 >body--- 376 (aah the first 1400) > >body--- 1024 >body--- 376 (aah the second 1400) > >body--- 1024 >body--- 246 (aah the third 1270) > >body--- 1024 - ok this is why I say the usual 1024 >body closed - ops where did the remaining 175 bytes go? > >My guess is that the port is closed while I'm parsing the last 1024 bytes >and that the 175 bytes are left in buffer. That would be a pretty gross bug - I can't reproduce it though. With the code below, I always get: 3> tcpbug:cli(56857). Got 638 bytes Got 1024 bytes Got 376 bytes Got 1024 bytes Got 376 bytes Got 1024 bytes Got 246 bytes Got 1024 bytes Got 175 bytes Got {error,closed} ok (I see where the 1024 is coming from - rather suboptimal choice of default buffer size in inet_drv. Of course in "most" cases of "bulk" TCP transfer you will probably have more than one packet worth of data available for gen_tcp:recv() and hence get 1024 every time, but a default of at least 1500 would avoid this "pathological" case without any significant problems.) Do you get a different result with that code? What version of Erlang/OTP are you running? Tcpdumping the traffic, I see that this code results in the FIN coming in its own packet rather than being "piggybacked" on the last data packet, which may be different from your case and could possibly be relevant - can you check that? --Per ----------------------- -module(tcpbug). -export([srv/0, cli/1]). srv() -> {ok, S} = gen_tcp:listen(0, []), {ok, P} = inet:port(S), io:format("Listen port: ~w~n", [P]), srv_loop(S). srv_loop(S) -> {ok, A} = gen_tcp:accept(S), lists:foreach(fun(B) -> sleep(2000), % mobile link... gen_tcp:send(A, lists:duplicate(B, $A)) end, [638, 1400, 1400, 1270, 1199]), gen_tcp:close(A), srv_loop(S). cli(P) -> {ok, S} = gen_tcp:connect("localhost", P, [list, {packet,0}, {active, false}]), cli_loop(S). cli_loop(S) -> case gen_tcp:recv(S, 0, 20000) of {ok, L} -> io:format("Got ~w bytes~n", [length(L)]), sleep(500), % parsing... cli_loop(S); R -> io:format("Got ~p~n", [R]) end, gen_tcp:close(S). sleep(T) -> receive after T -> ok end. From hedeland@REDACTED Sat Mar 11 14:55:08 2006 From: hedeland@REDACTED (Per Hedeland) Date: Sat, 11 Mar 2006 14:55:08 +0100 (CET) Subject: gen_tcp:recv - last segement when closed port In-Reply-To: <012c01c64502$1f04d7a0$7309fea9@MONEYMAKER2> Message-ID: <200603111355.k2BDt8Ze052180@tordmule.bluetail.com> "Valentin Micic" wrote: > >Could it be that default receive buffer is somehow set to 1K?. If so, try to >increase it using: >inet:setopts( Sock, [{recbuf, Integer}] ). 
Well, that a) shouldn't be needed, b) may just hide the bug if there is one (as Johan suggested), and c) may result in really bad performance if you set it to something just big enough to handle a single packet (like Johan did:-). You can avoid c) by using the undocumented 'buffer' option instead of 'recbuf'. (The recbuf/sndbuf options are "supposed" to set the size of the kernel-level buffers where data is stored until collected via the socket - recbuf directly affects the size of the TCP receive window. As a side effect, I'm not really sure why this is done, setting recbuf will also make sure the user-level buffer where data is stored after it has been collected via the socket is at least as big. It's the size of the latter that defaults to 1024 (the kernel-level buffer is generally way bigger), and which may be relevant for Johan's problem. The 'buffer' option will set only the size of the user-level buffer. Of course the idea is that the Erlang programmer should never need to worry about these nitty- gritty details.:-) --Per From johan@REDACTED Sat Mar 11 20:28:15 2006 From: johan@REDACTED (jmT2) Date: Sat, 11 Mar 2006 20:28:15 +0100 Subject: gen_tcp:recv - last segement when closed port In-Reply-To: <200603111319.k2BDJxjY052035@tordmule.bluetail.com> References: <200603111319.k2BDJxjY052035@tordmule.bluetail.com> Message-ID: > Do you get a different result with that code? What version of Erlang/OTP > are you running? V 5.4.13 I downloaded the latest. To see if it did any difference. > Tcpdumping the traffic, I see that this code results in > the FIN coming in its own packet rather than being "piggybacked" on the > last data packet, which may be different from your case and could > possibly be relevant - can you check that? Hmmm, my FIN bit is actually piggybacked on the last HTTP continuation! In the Ethereal trace it is then immediately acked by my client. I've tried to reconstruct the senario with the tcpbug. Tried to set {dealy_send, true} on the server so that the last 1199 bytes should be buffered and only sent when the socket was closed. Hmm, try the below, I get the wriong reply every fifth run: The server is sending one packet with 1500 bytes, the reciever has a recbuf of 100 bytes (don't know how important this is nor why I get 1024 bytes in the first read). 8>tcpbug:cli(3481). Got 1024 bytes Got {error,closed} ok 9> tcpbug:cli(3483). Got 1024 bytes Got 476 bytes Got {error,closed} ok 10> tcpbug:cli(3485). Got 1024 bytes Got 476 bytes Got {error,closed} ok 11> tcpbug:cli(3487). Got 1024 bytes Got {error,closed} ok 12> tcpbug:cli(3489). Got 1024 bytes Got 476 bytes Got {error,closed} ok 13> tcpbug:cli(3491). Got 1024 bytes Got 476 bytes Got {error,closed} ok 14> tcpbug:cli(3493). Got 1024 bytes Got {error,closed} ok 15> 8<------------------------------------------------------------------------ -module(tcpbug). -export([srv/0, cli/1]). srv() -> {ok, S} = gen_tcp:listen(0, [{delay_send, true}]), {ok, P} = inet:port(S), io:format("Listen port: ~w~n", [P]), srv_loop(S). srv_loop(S) -> {ok, A} = gen_tcp:accept(S), gen_tcp:send(A, lists:duplicate(1500, $A)), gen_tcp:close(A). cli(P) -> {ok, S} = gen_tcp:connect("localhost", P, [list, {recbuf, 100}, {packet,0}, {active, false}]), cli_loop(S). cli_loop(S) -> case gen_tcp:recv(S, 0, 40000) of {ok, L} -> io:format("Got ~w bytes~n", [length(L)]), sleep(500), % parsing... cli_loop(S); R -> io:format("Got ~p~n", [R]) end, gen_tcp:close(S). sleep(T) -> receive after T -> ok end. 
----------- Johan Montelius jmT2 From johan@REDACTED Sat Mar 11 21:13:54 2006 From: johan@REDACTED (jmT2) Date: Sat, 11 Mar 2006 21:13:54 +0100 Subject: gen_tcp:recv - last segement when closed port In-Reply-To: References: <200603111319.k2BDJxjY052035@tordmule.bluetail.com> Message-ID: Should add that I'm running Windows XP Professional SP2. The behaviour is probably OS specific since it has to do with how tcp buffers and sends packets. Didn't get a Ethereal trace since it the traffic was all local on one machine. > I've tried to reconstruct the senario with the tcpbug. Tried to set > {dealy_send, true} on the server so that the last 1199 bytes should be > buffered and only sent when the socket was closed. Hmm, try the below, I > get the wriong reply every fifth run: ----------- Johan Montelius jmT2 From valentin@REDACTED Sat Mar 11 21:38:54 2006 From: valentin@REDACTED (Valentin Micic) Date: Sat, 11 Mar 2006 22:38:54 +0200 Subject: gen_tcp:recv - last segement when closed port References: <200603111355.k2BDt8Ze052180@tordmule.bluetail.com> Message-ID: <014801c6454b$ef359dd0$7309fea9@MONEYMAKER2> It's been a long time since I've been thinking about this, but as much as I could remember the remote close request is treated as OUT-OF-BAND data, which means that it will bypass anything that exist in/on protocol stack's buffer. So, if there is, say, 1200 octets in buffer, and you read 348, and before you managed to read remaining octets, TCP_CLOSE request arrives, you will lose unread data. This has nothing to do with ERLANG, but with the way tcp works. But then again, it's been a long time, and my memory might not serve me correctly. V. ----- Original Message ----- From: "Per Hedeland" To: Cc: ; Sent: Saturday, March 11, 2006 3:55 PM Subject: Re: gen_tcp:recv - last segement when closed port > "Valentin Micic" wrote: >> >>Could it be that default receive buffer is somehow set to 1K?. If so, try >>to >>increase it using: >>inet:setopts( Sock, [{recbuf, Integer}] ). > > Well, that a) shouldn't be needed, b) may just hide the bug if there is > one (as Johan suggested), and c) may result in really bad performance if > you set it to something just big enough to handle a single packet (like > Johan did:-). You can avoid c) by using the undocumented 'buffer' option > instead of 'recbuf'. > > (The recbuf/sndbuf options are "supposed" to set the size of the > kernel-level buffers where data is stored until collected via the socket > - recbuf directly affects the size of the TCP receive window. As a side > effect, I'm not really sure why this is done, setting recbuf will also > make sure the user-level buffer where data is stored after it has been > collected via the socket is at least as big. It's the size of the latter > that defaults to 1024 (the kernel-level buffer is generally way bigger), > and which may be relevant for Johan's problem. The 'buffer' option will > set only the size of the user-level buffer. 
Of course the idea is that > the Erlang programmer should never need to worry about these nitty- > gritty details.:-) > > --Per >

From hedeland@REDACTED Sat Mar 11 22:15:01 2006
From: hedeland@REDACTED (Per Hedeland)
Date: Sat, 11 Mar 2006 22:15:01 +0100 (CET)
Subject: gen_tcp:recv - last segement when closed port
In-Reply-To: <014801c6454b$ef359dd0$7309fea9@MONEYMAKER2>
Message-ID: <200603112115.k2BLF1qc053614@tordmule.bluetail.com>

"Valentin Micic" wrote:
>
>It's been a long time since I've been thinking about this, but as much as I
>could remember the remote close request is treated as OUT-OF-BAND data,
>which means that it will bypass anything that exist in/on protocol stack's
>buffer. So, if there is, say, 1200 octets in buffer, and you read 348, and
>before you managed to read remaining octets, TCP_CLOSE request arrives, you
>will lose unread data. This has nothing to do with ERLANG, but with the way
>tcp works.
>But then again, it's been a long time, and my memory might not serve me
>correctly.

I'm afraid I have to agree with that - your memory is not serving you correctly. TCP promises to deliver a reliable byte stream, and with the socket interface you can read it however you want, and still get all the bytes that the remote wrote before closing. Many protocols rely on this, notably "plain" HTTP/1.0 as in Johan's case. If there is a problem here, it is either in Erlang's inet_drv driver or in the TCP/socket implementation that Johan is using (but the latter is extremely unlikely, even on that particular OS).

Maybe you're thinking of TCP reset (RST), which will indeed in common implementations throw away any unread data. And come to think of it, it's not uncommon that "that particular OS" seems to think that RST is the proper way to close a TCP session... Johan, you're not seeing an RST from the server in the traces, are you?

--Per

From johan@REDACTED Sun Mar 12 08:28:41 2006
From: johan@REDACTED (jmT2)
Date: Sun, 12 Mar 2006 08:28:41 +0100
Subject: gen_tcp:recv - last segement when closed port
In-Reply-To: <200603112115.k2BLF1qc053614@tordmule.bluetail.com>
References: <200603112115.k2BLF1qc053614@tordmule.bluetail.com>
Message-ID:

Here is the ethereal dump. If you're running a newer Ethereal you will have to change the settings for reassembly of HTTP traffic. In Edit/Pref../Protocols/HTTP you should tick off reassemble TCP etc. The last HTTP packet is #84, see the FIN flag.

Johan -- ----------- Johan Montelius jmT2
-------------- next part --------------
A non-text attachment was scrubbed...
Name: tcpbug.cap
Type: application/octet-stream
Size: 21874 bytes
Desc: not available
URL:

From johan@REDACTED Sun Mar 12 09:05:47 2006
From: johan@REDACTED (jmT2)
Date: Sun, 12 Mar 2006 09:05:47 +0100
Subject: gen_tcp:recv - last segement when closed port
In-Reply-To: <200603112115.k2BLF1qc053614@tordmule.bluetail.com>
References: <200603112115.k2BLF1qc053614@tordmule.bluetail.com>
Message-ID:

How does gen_tcp:recv/2 work? It calls inet_db:lookup_socket/1 to see where the socket belongs or ...? I guess this is where it detects that the socket is closed? What happens in erlang:port_get_data/1? I guess the problem is in how WinSock is used.

Johan

gen_tcp:--------------------
recv(S, Length) when port(S) ->
    case inet_db:lookup_socket(S) of
        {ok, Mod} ->
            Mod:recv(S, Length);
        Error ->
            Error
    end.

inet_db:---------------------
lookup_socket(Socket) when port(Socket) ->
    case catch erlang:port_get_data(Socket) of
        Module when atom(Module) -> {ok, Module};
        _ -> {error, closed}
    end.
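One way to see where the {error, closed} actually comes from is to trace the receive path from the shell before running tcpbug:cli/1. This is only a sketch, not from the original thread; it assumes the passive-mode path goes through prim_inet:recv, as described later in the thread:

1> dbg:tracer().
2> dbg:p(all, c).
3> dbg:tpl(gen_tcp, recv, [{'_', [], [{return_trace}]}]).
4> dbg:tpl(prim_inet, recv, [{'_', [], [{return_trace}]}]).
5> dbg:tpl(inet_db, lookup_socket, [{'_', [], [{return_trace}]}]).

Each traced call and its return value, including the final {error, closed}, then shows up as a trace message, which helps narrow down whether the error is produced by the driver (via prim_inet:recv) or by the lookup in inet_db.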
----------- Johan Montelius jmT2 From csanto@REDACTED Sun Mar 12 10:59:58 2006 From: csanto@REDACTED (Corrado Santoro) Date: Sun, 12 Mar 2006 10:59:58 +0100 Subject: gen_tcp:recv - last segement when closed port In-Reply-To: <200603112115.k2BLF1qc053614@tordmule.bluetail.com> References: <200603112115.k2BLF1qc053614@tordmule.bluetail.com> Message-ID: <4413F11E.8050402@diit.unict.it> Hi all, as far as I remember, sometimes ago I had a similar problem in a C program, so, even if Per argue that TCP promises a reliable byte stream (and this is true), I think that there is a sort of strange behaviour in handling disconnection conditions (I experienced this problem on Linux). I solved the problem by readling a byte at time, but undoubtedly this is not an elegant solution. It could be related to the fact that, if you have 300 final bytes in the buffer and perform a read requesting 1024 bytes, the socket implementation says "no, I can't give you the buffer you request because the socket has been closed". --Corrado Per Hedeland ha scritto: > "Valentin Micic" wrote: >> It's been a long time since I've been thinking about this, but as much as I >> could remember the remote close request is treated as OUT-OF-BAND data, >> which means that it will bypass anything that exist in/on protocol stack's >> buffer. So, if there is, say, 1200 octets in buffer, and you read 348, and >> before you managed to read remaining octets, TCP_CLOSE request arrives, you >> will lose unread data. This has nothing to do with ERLANG, but with the way >> tcp works. >> But then again, it's been a long time, and my memory might not serve me >> correctly. > > I'm afraid I have to agree with that - your memory is not serving you > correctly. TCP promises to deliver a reliable byte stream, and with the > socket interface you can read it however you want, and still get all the > bytes that the remote wrote before closing. Many protocols rely on this, > notably "plain" HTTP/1.0 as in Johan's case. If there is a problem here, > it is either in Erlang's inet_drv driver or in the TCP/socket > implementation that Johan is using (but the latter is extremely > unlikely, even on that particular OS). > > Maybe you're thinking of TCP reset (RST), which will indeed in common > implementations throw away any unread data. And come to think of it, > it's not uncommon that "that particular OS" seems to think that RST is > the proper way to close a TCP session... Johan, you're not seeing an RST > from the server in the traces, are you? > > --Per From valentin@REDACTED Sun Mar 12 14:13:47 2006 From: valentin@REDACTED (Valentin Micic) Date: Sun, 12 Mar 2006 15:13:47 +0200 Subject: gen_tcp:recv - last segement when closed port References: <200603112115.k2BLF1qc053614@tordmule.bluetail.com> Message-ID: <018b01c645d7$04447bc0$7309fea9@MONEYMAKER2> > Maybe you're thinking of TCP reset (RST), No I was thinking about FIN-1, and yes, I was wrong, as far as protocol is concerned . Although, this is not relevant to the case in question, consider a sequence diagram below: A --------512By------------>B ---- reads 128By -------> App A <------64------------------B <----------64By---------- App B -----reads 128By -------> App A ----------FIN-1-----------> B <-----------64By--------- App A <--------64By--------------| A ---------RESET----------> B Remaining 256 Bytes are lost . . . If application read only fraction of the data available, as per your assertion, arrival of FIN-1 shall not inform App about remote closure. 
This leaves App free to send any kind of data back to the remote peer. However, when remote peer receives such data, considering that it is in FIN-WAIT state, it has no choice but to reply with RESET, and hence force receive buffer on App side to flush all the data. I had a situation like this many years ago, and assocciated the loss of data to close request, while infact, it was App sending data causing remote peer to issue RESET request, and hence flushing the tcp buffer on App side. Thank you for making me think it through :-). V. ----- Original Message ----- From: "Per Hedeland" To: Cc: ; Sent: Saturday, March 11, 2006 11:15 PM Subject: Re: gen_tcp:recv - last segement when closed port > "Valentin Micic" wrote: >> >>It's been a long time since I've been thinking about this, but as much as >>I >>could remember the remote close request is treated as OUT-OF-BAND data, >>which means that it will bypass anything that exist in/on protocol stack's >>buffer. So, if there is, say, 1200 octets in buffer, and you read 348, and >>before you managed to read remaining octets, TCP_CLOSE request arrives, >>you >>will lose unread data. This has nothing to do with ERLANG, but with the >>way >>tcp works. >>But then again, it's been a long time, and my memory might not serve me >>correctly. > > I'm afraid I have to agree with that - your memory is not serving you > correctly. TCP promises to deliver a reliable byte stream, and with the > socket interface you can read it however you want, and still get all the > bytes that the remote wrote before closing. Many protocols rely on this, > notably "plain" HTTP/1.0 as in Johan's case. If there is a problem here, > it is either in Erlang's inet_drv driver or in the TCP/socket > implementation that Johan is using (but the latter is extremely > unlikely, even on that particular OS). > > Maybe you're thinking of TCP reset (RST), which will indeed in common > implementations throw away any unread data. And come to think of it, > it's not uncommon that "that particular OS" seems to think that RST is > the proper way to close a TCP session... Johan, you're not seeing an RST > from the server in the traces, are you? > > --Per > From hedeland@REDACTED Sun Mar 12 16:44:47 2006 From: hedeland@REDACTED (Per Hedeland) Date: Sun, 12 Mar 2006 16:44:47 +0100 (CET) Subject: gen_tcp:recv - last segement when closed port In-Reply-To: <4413F11E.8050402@diit.unict.it> Message-ID: <200603121544.k2CFiljs057488@tordmule.bluetail.com> Corrado Santoro wrote: > >as far as I remember, sometimes ago I had a similar problem in a C >program, so, even if Per argue that TCP promises a reliable byte stream >(and this is true), I think that there is a sort of strange behaviour in >handling disconnection conditions (I experienced this problem on Linux). >I solved the problem by readling a byte at time, but undoubtedly this is >not an elegant solution. And sometimes we come up with "solutions" that just happen to avoid the problem, which frequently leads to incorrect conclusions about what the problem really was, not to mention the incorrect conclusion that what we came up with actually *was* a *solution*... >It could be related to the fact that, if you have 300 final bytes in the >buffer and perform a read requesting 1024 bytes, the socket >implementation says "no, I can't give you the buffer you request because >the socket has been closed". Are you talking about Erlang sockets or C/Unix/POSIX ones now? 
What you describe is the semantics of gen_tcp:recv() (see the thread I referenced earlier), but for the C/Unix/POSIX socket API there is certainly no such "fact". If there are 300 bytes available, and you ask for 1024, you will get the 300 regardless of whether the connection has been closed or not (the *socket file descriptor* will of course not be closed until you actively close() it). --Per From hedeland@REDACTED Sun Mar 12 17:14:46 2006 From: hedeland@REDACTED (Per Hedeland) Date: Sun, 12 Mar 2006 17:14:46 +0100 (CET) Subject: gen_tcp:recv - last segement when closed port In-Reply-To: <018b01c645d7$04447bc0$7309fea9@MONEYMAKER2> Message-ID: <200603121614.k2CGEkuA057594@tordmule.bluetail.com> "Valentin Micic" wrote: > >If application read only fraction of the data available, as per your >assertion, arrival of FIN-1 shall not inform App about remote closure. This >leaves App free to send any kind of data back to the remote peer. However, >when remote peer receives such data, considering that it is in FIN-WAIT >state, it has no choice but to reply with RESET, and hence force receive >buffer on App side to flush all the data. Mostly a nitpick for the case you describe, but no, this isn't true. It is perfectly possible to continue to receive in FIN-WAIT - in fact this is fundamental for how "plain" HTTP/1.0 (i.e. without persistent connections, Content-Length, and other fancy stuff) works: The client sends the request and immediately closes the sending side of the connection (using shutdown()), which causes FIN to be sent and FIN-WAIT-1 to be entered. The server reads the request until the end-of-file indication that is the user-level-visible result of FIN having been received, and then sends the response (by then the client is likely in FIN-WAIT-2), and likewise immediately closes (normally with a "full" close()). The client finally reads the response until the end-of-file indication etc. Anyone who thinks that this procedure might lead to loss of data (assuming correctly functioning applications and available network connectivity) will have a hard time explaining the initial success of the web... In your case however there had presumably been a "full" close() at the remote end - the difference isn't really reflected in the TCP state machine - in which case the remote must indeed send RST in response to incoming data, to inform the sender that the data will not be delivered. > I had a situation like this many >years ago, and assocciated the loss of data to close request, while infact, >it was App sending data causing remote peer to issue RESET request, and >hence flushing the tcp buffer on App side. This sounds like a broken application-level protocol though - App shouldn't be sending data if the remote isn't prepared to receive it... --Per From robert.virding@REDACTED Sun Mar 12 18:10:53 2006 From: robert.virding@REDACTED (Robert Virding) Date: Sun, 12 Mar 2006 18:10:53 +0100 Subject: Search function at erlang.org Message-ID: <4414561D.7000802@telia.com> What is wrong with the search function at erlang.org? When I try it I get: "The server encountered an internal error or misconfiguration and was unable to complete your request." Robert From hedeland@REDACTED Sun Mar 12 19:04:55 2006 From: hedeland@REDACTED (Per Hedeland) Date: Sun, 12 Mar 2006 19:04:55 +0100 (CET) Subject: gen_tcp:recv - last segement when closed port In-Reply-To: Message-ID: <200603121804.k2CI4tre057995@tordmule.bluetail.com> jmT2 wrote: > >How does the gen_tcp:recv/2 work? 
It calls inet_db:lookup_socket/1 to see >where tehsocket belongs or ....? I guess this is where it detects that the >socket is closed? What happens in erlang:get_port_data/1? I guss the >problem is in how WinSock is used. The interesting stuff in this case happens in the inet driver (erts/emulator/drivers/common/inet_drv.c - the Erlang-side interaction is in prim_inet:recv()), in particular I believe inet_drv should not close the port when there is remaining data, and the {error, closed} should only happen in response to a RECV request for a socket that has {active, false} - i.e. the case in lookup_socket shouldn't come into play. Anyway, I wrote a little C program that quite faithfully reproduces the packet trace you posted, including the piggybacked FIN - still no "luck", I always get all the data with the Erlang client code. So, I would guess that it's either a bug in WinSock or (as you suggest) in inet_drv's usage of WinSock, neither of which I know anything about. You could try reproducing with that program (below) - of course it won't compile on Windows, but it should work on at least FreeBSD and Linux. Note that the inter-packet spacing is dropped to 200 ms (approximately what was in your trace), so if you use the tcpbug.erl client code you probably want to drop the "parse time" to 100 ms or so. --Per ----------------------------- #include #include #include #include #include #include #include #include #if defined(TCP_NOPUSH) #define OPTNAME TCP_NOPUSH #elif defined(TCP_CORK) #define OPTNAME TCP_CORK #else #error "Can't work here" #endif int size[] = {638, 1400, 1400, 1270, 1199, 0}; char buf[1500]; struct timespec ms200 = {0, 200000000}; main() { int s, a, i, val = 1; struct sockaddr_in sa; socklen_t sa_len = sizeof(sa); s = socket(PF_INET, SOCK_STREAM, 0); memset(&sa, '\0', sizeof(sa)); sa.sin_family = AF_INET; sa.sin_addr.s_addr = htonl(INADDR_ANY); sa.sin_port = 0; bind(s, (struct sockaddr *)&sa, sa_len); listen(s, 1); getsockname(s, (struct sockaddr *)&sa, &sa_len); printf("Listen port: %d\n", ntohs(sa.sin_port)); memset(buf, 'a', sizeof(buf)); while (1) { a = accept(s, NULL, NULL); for (i = 0; size[i] != 0; i++) { nanosleep(&ms200, NULL); if (size[i+1] == 0) setsockopt(a, IPPROTO_TCP, OPTNAME, &val, sizeof(val)); write(a, buf, size[i]); } close(a); } } From valentin@REDACTED Sun Mar 12 22:02:07 2006 From: valentin@REDACTED (Valentin Micic) Date: Sun, 12 Mar 2006 23:02:07 +0200 Subject: gen_tcp:recv - last segement when closed port References: <200603121614.k2CGEkuA057594@tordmule.bluetail.com> Message-ID: <01a801c64618$39bfd330$7309fea9@MONEYMAKER2> > Anyone who thinks that this procedure might lead to loss of data > (assuming correctly functioning applications and available network > connectivity) will have a hard time explaining the initial success of > the web... How about this: client opens a connection, issues a request, receives a reply and than closes the connection. To quote from RFC 2616, paragraph 8.1.2.1: An HTTP/1.1 client MAY expect a connection to remain open, but would decide to keep it open based on whether the response from a server contains a Connection header with the connection-token *close*. In case the client does not want to maintain a connection for more than that request, it SHOULD send a Connection header including the connection-token *close*. See, not that hard at all ;-) V. From ok@REDACTED Sun Mar 12 22:46:03 2006 From: ok@REDACTED (Richard A. 
O'Keefe) Date: Mon, 13 Mar 2006 10:46:03 +1300 (NZDT) Subject: Erlang/OTP R10B-10 has been released Message-ID: <200603122146.k2CLk3hS336587@atlas.otago.ac.nz> Bjorn Gustavsson wrote: Abstract patterns is tricky to implement efficiently as it would require inlining across module boundaries. Sorry, but as the inventor of abstract patterns, I can tell you that this is QUITE UNTRUE. Abstract patterns were designed so that they can be implemented *either* by inline code *or* by out-of-line code. At least one version of the paper explained how to do this. What's more, records *also* require inlining across module boundaries, it's just that the modules involved as called ".hrl files" and the inlining is done by the preprocessor. Inlining across modules boundaries with retained semantics for code change is possible to implement, but tricky. I repeat, you are ALREADY doing this for records and macros; the only difference is that with records and macros you didn't even TRY to get the semantics for code change right. Several years ago I proposed splitting -import into -import and -import_early, where importing a module early would create a recorded dependency between the two modules so that if the imported module were reloaded the corresponding version of the importing module would need reloading too. This would permit inlining of stuff declared in -import_early modules. Wouldn't this be a problem in practice? No, it would be a step forward because this one-way dependency ALREADY exists between .hrl files and the .erl files that include them. So telling the system what the xxxx is going on would create no worse dependency problem that already exists and would actually make life easier by enabling the system to tell you when you tried to make an inconsistent reload. Not that this matters, because ABSTRACT PATTERNS DO NOT NEED INLINING. They were consciously and explicitly designed *not* to have the same problems as records. Currently, we have no plans to implement abstract patterns. I would be happy to discuss this particular or any other apparent obstacle to implementing them. From jahakala@REDACTED Sun Mar 12 23:06:13 2006 From: jahakala@REDACTED (Jani Hakala) Date: Mon, 13 Mar 2006 00:06:13 +0200 Subject: gen_tcp:recv - last segement when closed port In-Reply-To: <200603111319.k2BDJxjY052035@tordmule.bluetail.com> (Per Hedeland's message of "Sat, 11 Mar 2006 14:19:59 +0100 (CET)") References: <200603111319.k2BDJxjY052035@tordmule.bluetail.com> Message-ID: <87slpnpbca.fsf@pingviini.kortex.jyu.fi> "Per Hedeland" writes: > 3> tcpbug:cli(56857). > Got 638 bytes > Got 1024 bytes > Got 376 bytes > Got 1024 bytes > Got 376 bytes > Got 1024 bytes > Got 246 bytes > Got 1024 bytes > Got 175 bytes > Got {error,closed} > ok > I tested tcpbug:cli in linux (R10B-9) and windows xp sp2 (R10B-10, [1]). In linux the last 175 byte packet is received but not in windows. It doesn't seem to matter if the tcpbug:srv is running on linux or windows. Jani Hakala [1] compiled with mingw32 i.e. not an official version From erlang@REDACTED Sun Mar 12 23:46:31 2006 From: erlang@REDACTED (Michael McDaniel) Date: Sun, 12 Mar 2006 14:46:31 -0800 Subject: OTP-5101 inets badmatch in R10B-10 Message-ID: <20060312224630.GH32539@delora.autosys.us> Here is the fix that was thought to have been done early in the R10B series: " OTP-5101 A programming error could cause a badmatch in the http-client when the http response was chunk decoded. " I still have this problem in R10B-10 which has been discussed before. 
See these threads: http://www.erlang.org/ml-archive/erlang-questions/200511/msg00147.html and http://www.erlang.org/ml-archive/erlang-questions/200511/msg00161.html Perhaps it is because I am running over SSL ? QUESTIONS: 1) Will ibrowse be included in OTP in R11 ? 2) Will the http-client be corrected in R11 to eliminate this problem? thanks, ~Michael From hedeland@REDACTED Mon Mar 13 00:35:17 2006 From: hedeland@REDACTED (Per Hedeland) Date: Mon, 13 Mar 2006 00:35:17 +0100 (CET) Subject: gen_tcp:recv - last segement when closed port In-Reply-To: <01a801c64618$39bfd330$7309fea9@MONEYMAKER2> Message-ID: <200603122335.k2CNZHw0059564@tordmule.bluetail.com> "Valentin Micic" wrote: > >> Anyone who thinks that this procedure might lead to loss of data >> (assuming correctly functioning applications and available network >> connectivity) will have a hard time explaining the initial success of >> the web... > >How about this: client opens a connection, issues a request, receives a >reply and than closes the connection. >To quote from RFC 2616, paragraph 8.1.2.1: > > >An HTTP/1.1 client MAY expect a connection to remain open, but would decide >to keep it open based on whether the response from a server contains a >Connection header with the connection-token *close*. In case the client does >not want to maintain a connection for more than that request, it SHOULD send >a Connection header including the connection-token *close*. > > >See, not that hard at all ;-) I'm afraid I have no idea what you're trying to say here - did you actually read the message you're replying to? I described the HTTP request/response procedure *without* persistent connections - they didn't exist at "the initial success of the web", far less HTTP/1.1. (Though it seems I was wrong about the request being terminated by "simplex close" from the client, even if that would certainly have been a possibility - but RFC 1945 effectively specifies server close as the only reliable way to know when the response ends.) --Per From valentin@REDACTED Mon Mar 13 01:38:18 2006 From: valentin@REDACTED (Valentin Micic) Date: Mon, 13 Mar 2006 02:38:18 +0200 Subject: gen_tcp:recv - last segement when closed port References: <200603122335.k2CNZHw0059564@tordmule.bluetail.com> Message-ID: <01d801c64636$6cbde380$7309fea9@MONEYMAKER2> Did I read your message -- yes I did. Maybe I could ask the same question? I was not describing persistent connection -- client closes the connection after receiving the response. The point I've been trying to make was that "initial success of web" had to rely on something more consistent -- the argument is reinforced in HTTP/1.1, assuming that it builds on weaknesses identified in predecessor... not that anything was wrong with server closing the connection, however, the quote I've included implies that practice did not work as planned. If one put Microsoft into equation (and world was more Microsoft then than it is today) that always had their own set of ideas on how world should work -- it is quite feasible that server closure created some problems. More I think of it, more convinced I get that Connection header has been introduced to accommodate Mickeysoft, whilst persistent connection was just a side effect ;-). Thus, the rest of the world uses *close*, and IE uses *keepalive* by default. But I'd like to end this debate, as I'm finding myself more wanting to prove you wrong, than to contribute meaningfully -- and I don't like doing that. 
Can we agree that world is not ideal place, and that TCP, in all its implementations, is far from being perfect. V. ----- Original Message ----- From: "Per Hedeland" To: Cc: Sent: Monday, March 13, 2006 1:35 AM Subject: Re: gen_tcp:recv - last segement when closed port > "Valentin Micic" wrote: >> >>> Anyone who thinks that this procedure might lead to loss of data >>> (assuming correctly functioning applications and available network >>> connectivity) will have a hard time explaining the initial success of >>> the web... >> >>How about this: client opens a connection, issues a request, receives a >>reply and than closes the connection. >>To quote from RFC 2616, paragraph 8.1.2.1: >> >> >>An HTTP/1.1 client MAY expect a connection to remain open, but would >>decide >>to keep it open based on whether the response from a server contains a >>Connection header with the connection-token *close*. In case the client >>does >>not want to maintain a connection for more than that request, it SHOULD >>send >>a Connection header including the connection-token *close*. >> >> >>See, not that hard at all ;-) > > I'm afraid I have no idea what you're trying to say here - did you > actually read the message you're replying to? I described the HTTP > request/response procedure *without* persistent connections - they > didn't exist at "the initial success of the web", far less HTTP/1.1. > (Though it seems I was wrong about the request being terminated by > "simplex close" from the client, even if that would certainly have been > a possibility - but RFC 1945 effectively specifies server close as the > only reliable way to know when the response ends.) > > --Per > From ok@REDACTED Mon Mar 13 01:42:21 2006 From: ok@REDACTED (Richard A. O'Keefe) Date: Mon, 13 Mar 2006 13:42:21 +1300 (NZDT) Subject: optimization of list comprehensions Message-ID: <200603130042.k2D0gLSU339880@atlas.otago.ac.nz> Mats Cronqvist wrote: yes, that didn't come out very well did it :> alas, a certain amount of relativism is perhaps unavoidable? Consider the case of temperature. Two people may disagree about whether the temperature is too high (even a husband and wife sharing a bed may disagree about that), but if they get out their thermometers they will agree about what the temperature *is*. And if you get to know someone well (e.g. if you are married to them for 15 years) each of them gets to know whether the other will like a particular temperature or not. So just because there are different preferences does not mean that there is any unavoidable relativism. consider the code below. i think foo/0 is a completely unacceptable eyesore, whilst bla/1 is merely annoying. i know for a fact that some people think foo/0 is perfectly fine erlang. so how would you propose i argue they're wrong? You wouldn't because what they think is really at least *two* things: - the properties of fragment X are such and such [this is objective] - I like that [this is opinion] It may indeed be at least three things: - the properties of fragment X are such and such [objective] - I have such and such goals [about themselves] - those properties support those goals [objective/empirical] -export([foo/0,bla/1]). Already I think that failing to put a space after a comma is a bad thing. Property: there is no space after the comma. Goals: readability. Support: spaces improve readability. To argue that I am "wrong" about this you would have to show either that I was wrong to have readability as a goal or that as an empirical matter spaces did not improve readability. 
(By the way, I'm following the Ada Quality and Style Guidelines here. That remains my very favourite style guide for code because of its copious rationales.) -define(FOO,"FOO". foo()-> FOO = ?FOO "FOO". Here one could argue along these lines: (1) People make mistakes. (2) If you can make it easy for a tool to help you catch mistakes without hurting anything else too much, that's a good thing, (3) Mismatched parentheses are a fairly common kind of mistake. (4) You don't *have* to leave out the ")" in that macro: -define(FOO, "FOO"). works as well or better. One could also appeal to Grice's maxims of conversation, and point out that (5) Code that mentions a variable when it doesn't need to is harder to read than code that doesn't, because it makes the reader stop and try to figure out *why* the maxims have been violated. So -define(FOO, "FOO"). foo() -> ?FOO "FOO". does everything that the original did, in fewer tokens, with less confusion. Surely that's an argument? Then one can make the point that tracing and test coverage are important debugging tools, so that foo1() -> "FOO". foo() -> foo1() ++ "FOO". allows foo1() to be instrumented in ways that ?FOO cannot. If your interlocutor does not value such things, then you just have to wait for experience to teach them otherwise. bla(X) -> bla(tl(X),{hd(X)}). bla([H|T],A) -> bla(T,{H,A}); bla([],A) -> A. This has one overwhelming advantage over code that uses macros: you can actually *read* it because the code is *there* for you to read. But I have more trouble *believing* this code than the other. I can more easily imagine someone wanting "FOOFOO" than I can imagine someone wanting to turn [1,2,3] into {3,{2,{1}}}. Apparently we are both annoyed by bla/1, and I am sure we would agree about what the code *is*, but I wonder whether we are annoyed by the same things? - Spacing? - The lack of intention-revealing names? [If you want to argue with someone about that, there's a book by Henry Ledgard and Joh Tauer. (Two volumes, both paperback.) My copy went for a walk some years ago and never came back, so I'm not perfectly sure of the title, but I *think* the two volumes are 1. Professional Software: Software Engineering Concepts 2. Professional Software: Programming Practice. As I recall it, there was a good discussion of good naming practice in one of those volumes, probably the second. ] - The absence of comments? - The strange data type? -type t(T) -> {T} | {T,t(T)}. - The fact that the strange data type doesn't have an "empty" case? From ok@REDACTED Sun Mar 12 22:32:42 2006 From: ok@REDACTED (Richard A. O'Keefe) Date: Mon, 13 Mar 2006 10:32:42 +1300 (NZDT) Subject: optimization of list comprehensions Message-ID: <200603122132.k2CLWgUV336605@atlas.otago.ac.nz> !valof 3300 - 2907 I wrote: > The pattern illustrated for "fun", if it is to be taken seriously as a > grep-style pattern, is guaranteed to miss many of the funs in the OTP > sources. Predictably, Mats Cronqvist challenged this: "many"? enough to affect the result significantly? example please. m2h -ik -lerl /tmp/ERL | grep -w "fun" | wc -l => 3300 fgrep "fun(" /tmp/ERL | wc -l => 2907 /tmp/ERL is a copy of the R9something OTP sources with comment stripped out. The "fun(" pattern misses 393 occurrences of the "fun" keyword, which is nearly 12% of the total. To me, nearly 400 occurrences may fairly be called "many", and a 12% error is "significant". Others may make other judgements. > The language has changed. What *was* 'perfectly fine' Erlang isn't any > more. 
It's perfectly fine in that it still works, but new code should > not be written that way. well, i guess i'm a bit more flexible than you in that regard. i think it's ok for people to write C for C++ compilers, FORTRAN77 for Fortran90 compilers, etc, as long as the code works and is readable. of course it would be much *better* if they didn't. It is "perfectly fine" to do something but it would be "much *better*" if they didn't? To me that's a contradiction. By the way, the C++ standard has *PAGES* (I mean *LOTS* of pages) of incompatibilities between C and C++. If I want to use C code as part of a C++ system, I compile it with a C compiler, because a C++ compiler *must* interpret it in subtly different (and therefore subtly buggy) ways. Anybody writing C for C++ compilers is not just asking for trouble but screaming for it like a Siamese cat on heat. > In fact, given that we are talking about source code that has been > around for a while and not fixed when it wasn't broken, I draw the > opposite conclusion to Mats from his own figures. this was my conclusion; "one can write well working Erlang code while rarely, if ever, using funs". quite a feat to draw the opposite conclusion from my figures. No, the conclusion I'm talking about was the one claiming that "industrial" programmers cannot or will not use funs. Your evidence shows, on the contrary, that they can and will once funs are there, reliable, efficient, and explained to them. > Hypothesis: > The proportion of 'funs' (and list comprehensions) in files > is increasing with time, so that 'industrial programmers' > *are* taking up the new features, they just aren't rewriting > old code for the fun of it. this is probably true. of course, it's a non sequitur (or perhaps even a straw man argument?). noone has claimed the opposite, and no figures that i'm aware of says anything about trends over time. But you *did* claim that "industrial" programmers were staying away from funs; that's what the whole thread is about! From cschatz@REDACTED Mon Mar 13 02:28:51 2006 From: cschatz@REDACTED (Charles F. Schatz) Date: Mon, 13 Mar 2006 14:28:51 +1300 Subject: Problem configuring otp_src_R10B-10/erts Message-ID: <001c01c6463d$7d1b7a60$6001a8c0@linkworks.lan> Building Erlang OTP nder QNX 6.3, I get the following error: ./configure checking build system type... i386-pc-nto-qnx6.3.0 checking host system type... i386-pc-nto-qnx6.3.0 checking for gcc... gcc checking for C compiler default output... a.out checking whether the C compiler works... yes checking whether we are cross compiling... no checking for suffix of executables... checking for suffix of object files... o checking whether we are using the GNU C compiler... yes checking whether gcc accepts -g... yes checking for gcc option to accept ANSI C... none needed checking for library containing strerror... none required checking for gcc... (cached) gcc checking whether we are using the GNU C compiler... (cached) yes checking whether gcc accepts -g... (cached) yes checking for gcc option to accept ANSI C... (cached) none needed checking for mixed cygwin and native VC++ environment... no checking how to run the C preprocessor... gcc -E checking for ranlib... ranlib checking for bison... bison -y checking for perl5... no checking for perl... /usr/bin/perl checking whether ln -s works... yes checking for ar... ar checking for rm... /bin/rm checking for mkdir... /bin/mkdir checking for a BSD-compatible install... /opt/bin/install -c checking how to create a directory including parents... 
/opt/bin/install -c -d checking for extra flags needed to export symbols... none none checking for sin in -lm... no checking for dlopen in -ldl... no checking for main in -linet... no checking for egrep... grep -E checking for ANSI C header files... no checking for sys/types.h... no checking for sys/stat.h... no checking for stdlib.h... no checking for string.h... no checking for memory.h... no checking for strings.h... no checking for inttypes.h... no checking for stdint.h... no checking for unistd.h... no checking for native win32 threads... no checking for pthread_create in -lpthread... no checking for pthread_create in -lc_r... no checking for pthread_create in -lc... no checking if the '-pthread' switch can be used... no checking whether the emulator should use threads... no checking for tgetent in -lncurses... no checking for tgetent in -lcurses... no checking for tgetent in -ltermcap... no checking for tgetent in -ltermlib... no configure: error: No curses library functions found There is a problem with $ERL_TOP/erts/configure not passing the LDFLAGS environment variable for test compiles, so ALL library tests fail. I modified files $ERL_TOP/erts/aclocal.m4 and $ERL_TOP/configure.in in order to locate pthread_create in the standard C runtime library. The $ERL_TOP/configure.in produces a configure that works, wheras the $ERL_TOP/erts/configure.in does not. At the top of file erts/configure.in: AC_PREREQ(2.13) AC_INIT(vsn.mk) Commenting out AC_PREREQ(2.13), or changing it to AC_PREREQ(2.57) (the actual autoconf version on the compiling machine) has no effect on the above result. I have also installed autoconf 2.13 and gotten incompatibility results with all the configuration files. Any ideas anyone? Cheers, Charles F. Schatz From ke.han@REDACTED Mon Mar 13 07:04:03 2006 From: ke.han@REDACTED (ke han) Date: Mon, 13 Mar 2006 14:04:03 +0800 Subject: repos for darwin-x86 ? Message-ID: I have tried using the latest repos 1.4b2 on OS X iMac Intel. When I run erl (with the path set correctly, I get the following output: > which erl /Users/jhancock/repos/bin/erl jhancock@REDACTED ~ > erl /Users/jhancock/repos/bin/erl: line 30: /Users/jhancock/repos/erlang/ erts-5.4.12-darwin-x86/bin/erlexec: No such file or directory /Users/jhancock/repos/bin/erl: line 30: exec: /Users/jhancock/repos/ erlang/erts-5.4.12-darwin-x86/bin/erlexec: cannot execute: No such file or directory So, repos has a darwin-powerpc directory but not one for darwin-x86. Seems that your run scripts are correct, just missing the build ;-) If the reason you don't have a darwin-x86 build is for lack of a machine, I can do the build for you if you want to walk me through the process. thanks, ke han From raimo@REDACTED Mon Mar 13 09:05:11 2006 From: raimo@REDACTED (Raimo Niskanen) Date: 13 Mar 2006 09:05:11 +0100 Subject: Search function at erlang.org References: <4414561D.7000802@telia.com> Message-ID: That would be the search of the online mailing list archives, right? We ran into problems when setting up the new machine serving erlang.org, so the search function is unfortunately broken. Therefore we replaced the toplevel search at erlang.se (and erlang.org) with google search links. We will probably do the same with the mailing list archives search. robert.virding@REDACTED (Robert Virding) writes: > What is wrong with the search function at erlang.org? When I try it I get: > > "The server encountered an internal error or misconfiguration and was > unable to complete your request." 
> > Robert > -- / Raimo Niskanen, Erlang/OTP, Ericsson AB From hedeland@REDACTED Mon Mar 13 11:04:33 2006 From: hedeland@REDACTED (Per Hedeland) Date: Mon, 13 Mar 2006 11:04:33 +0100 (CET) Subject: gen_tcp:recv - last segement when closed port In-Reply-To: <01d801c64636$6cbde380$7309fea9@MONEYMAKER2> Message-ID: <200603131004.k2DA4Xve061792@tordmule.bluetail.com> "Valentin Micic" wrote: > >Did I read your message -- yes I did. Maybe I could ask the same question? >I was not describing persistent connection -- client closes the connection >after receiving the response. You described a scenario (and quoted the corresponding RFC) that not just presumes the existence of persistent connections, but where they are the default - hence the need to use 'Connection: close' when you don't want them. The point here is that without a reliable Content-Length: header, "after receiving the response" <=> "the server has closed the connection". >The point I've been trying to make was that "initial success of web" had to >rely on something more consistent Good luck in making that point - since it provably didn't. There *are* problems with this approach, see the caveat in my statement - but they are not attributes of TCP or the standard socket API. > More I think of >it, more convinced I get that Connection header has been introduced to >accommodate Mickeysoft, whilst persistent connection was just a side effect >;-). Well, conspiracy theories and M$-bashing is always fun, but I think you need a reality check here... > Thus, the rest of the world uses *close*, and IE uses *keepalive* by >default. ...and here. *Everything* uses persistent connections today. >But I'd like to end this debate, as I'm finding myself more wanting to prove >you wrong, than to contribute meaningfully -- and I don't like doing that. >Can we agree that world is not ideal place, and that TCP, in all its >implementations, is far from being perfect. Sure - but it's not relevant here. --Per From bjorn@REDACTED Mon Mar 13 15:08:27 2006 From: bjorn@REDACTED (Bjorn Gustavsson) Date: 13 Mar 2006 15:08:27 +0100 Subject: Abstract patterns In-Reply-To: <200603122146.k2CLk3hS336587@atlas.otago.ac.nz> References: <200603122146.k2CLk3hS336587@atlas.otago.ac.nz> Message-ID: If you tell me where I can find the latest version of the paper, I'll have another look at it. What I meant by the phrase "implement efficiently" is that performance should be comparable to that of records. If that indeed would be possible to achive without inlining, we would be more interested in implementing abstract patterns. /Bjorn "Richard A. O'Keefe" writes: > Bjorn Gustavsson wrote: > Abstract patterns is tricky to implement efficiently as it would > require inlining across module boundaries. > > Sorry, but as the inventor of abstract patterns, I can tell you > that this is QUITE UNTRUE. Abstract patterns were designed so that > they can be implemented *either* by inline code *or* by out-of-line > code. At least one version of the paper explained how to do this. > > What's more, records *also* require inlining across module boundaries, > it's just that the modules involved as called ".hrl files" and the > inlining is done by the preprocessor. > > Inlining across modules boundaries with retained semantics for > code change is possible to implement, but tricky. > > I repeat, you are ALREADY doing this for records and macros; the only > difference is that with records and macros you didn't even TRY to get > the semantics for code change right. 
> > Several years ago I proposed splitting -import into -import and > -import_early, where importing a module early would create a recorded > dependency between the two modules so that if the imported module were > reloaded the corresponding version of the importing module would need > reloading too. This would permit inlining of stuff declared in > -import_early modules. Wouldn't this be a problem in practice? No, > it would be a step forward because this one-way dependency ALREADY > exists between .hrl files and the .erl files that include them. So > telling the system what the xxxx is going on would create no worse > dependency problem that already exists and would actually make life > easier by enabling the system to tell you when you tried to make an > inconsistent reload. > > Not that this matters, because ABSTRACT PATTERNS DO NOT NEED INLINING. > They were consciously and explicitly designed *not* to have the same > problems as records. > > Currently, we have no plans to implement abstract patterns. > > I would be happy to discuss this particular or any other apparent > obstacle to implementing them. > -- Bj?rn Gustavsson, Erlang/OTP, Ericsson AB From ulf.wiger@REDACTED Mon Mar 13 16:52:47 2006 From: ulf.wiger@REDACTED (Ulf Wiger (AL/EAB)) Date: Mon, 13 Mar 2006 16:52:47 +0100 Subject: Abstract patterns Message-ID: Bjorn Gustavsson wrote: > > What I meant by the phrase "implement efficiently" is that > performance should be comparable to that of records. If that > indeed would be possible to achive without inlining, we would > be more interested in implementing abstract patterns. But why is that a requirement in the first place? I can imagine that being able to use abstract patterns would greatly simplify some applications, where using records isn't really an alternative today anyway. What would be _sufficient_ performance in order for abstract patterns to be justified? /Ulf W From david.nospam.hopwood@REDACTED Mon Mar 13 18:22:05 2006 From: david.nospam.hopwood@REDACTED (David Hopwood) Date: Mon, 13 Mar 2006 17:22:05 +0000 Subject: Abstract patterns In-Reply-To: References: <200603122146.k2CLk3hS336587@atlas.otago.ac.nz> Message-ID: <4415AA3D.5030101@blueyonder.co.uk> Bjorn Gustavsson wrote: > If you tell me where I can find the latest version of the paper, > I'll have another look at it. > > What I meant by the phrase "implement efficiently" is that performance > should be comparable to that of records. If that indeed would be possible > to achive without inlining, we would be more interested in implementing ^^^^^^^^^^^^^^^^ > abstract patterns. This seems like an odd way of deciding whether and how to implement a language feature. The preferred process IMHO is something like: 1. Design the feature, paying attention (among other criteria) to whether it *would* be feasible to implement efficiently given a bit of effort. 2. Document the feature. 3. Implement the feature naively. 4. Gain experience with how people are using the feature, and a body of code on which optimizations can be tested. 5. Optimize the implementation. in that order. -- David Hopwood From sean.hinde@REDACTED Mon Mar 13 19:56:35 2006 From: sean.hinde@REDACTED (Sean Hinde) Date: Mon, 13 Mar 2006 18:56:35 +0000 Subject: Abstract patterns In-Reply-To: References: <200603122146.k2CLk3hS336587@atlas.otago.ac.nz> Message-ID: On 13 Mar 2006, at 14:08, Bjorn Gustavsson wrote: > If you tell me where I can find the latest version of the paper, > I'll have another look at it. 
> > What I meant by the phrase "implement efficiently" is that performance > should be comparable to that of records. If that indeed would be > possible > to achive without inlining, we would be more interested in > implementing > abstract patterns. I thought the idea was as much about providing a nice interface into all kind of abstract data types, not just as a record replacement. It opens the door to things like a queue data structure that allows test for empty queue in a function header without having to break the data abstraction. One reservation is around the syntax for adding and extracting multiple entries to/from an abstract pattern. It is a bit cumbersome (as noted in the version of the paper posted on the mailing list some time back): L2 = L#log{blocked_by = none, status = ok} which is arguably prettier than L2 = L#log_blocked_by(none)#log_status(ok) #log{status == S, users = N} = L which is arguably prettier than #log_status(S) = #log_users(N) = L In any case I am very much in favour of this initiative. Sean > > /Bjorn > > "Richard A. O'Keefe" writes: > >> Bjorn Gustavsson wrote: >> Abstract patterns is tricky to implement efficiently as it would >> require inlining across module boundaries. >> >> Sorry, but as the inventor of abstract patterns, I can tell you >> that this is QUITE UNTRUE. Abstract patterns were designed so that >> they can be implemented *either* by inline code *or* by out-of-line >> code. At least one version of the paper explained how to do this. >> >> What's more, records *also* require inlining across module >> boundaries, >> it's just that the modules involved as called ".hrl files" and the >> inlining is done by the preprocessor. >> >> Inlining across modules boundaries with retained semantics for >> code change is possible to implement, but tricky. >> >> I repeat, you are ALREADY doing this for records and macros; the only >> difference is that with records and macros you didn't even TRY to get >> the semantics for code change right. >> >> Several years ago I proposed splitting -import into -import and >> -import_early, where importing a module early would create a recorded >> dependency between the two modules so that if the imported module >> were >> reloaded the corresponding version of the importing module would need >> reloading too. This would permit inlining of stuff declared in >> -import_early modules. Wouldn't this be a problem in practice? No, >> it would be a step forward because this one-way dependency ALREADY >> exists between .hrl files and the .erl files that include them. So >> telling the system what the xxxx is going on would create no worse >> dependency problem that already exists and would actually make life >> easier by enabling the system to tell you when you tried to make an >> inconsistent reload. >> >> Not that this matters, because ABSTRACT PATTERNS DO NOT NEED >> INLINING. >> They were consciously and explicitly designed *not* to have the same >> problems as records. >> >> Currently, we have no plans to implement abstract patterns. >> >> I would be happy to discuss this particular or any other apparent >> obstacle to implementing them. 
>> > > -- > Bj?rn Gustavsson, Erlang/OTP, Ericsson AB From matthias@REDACTED Mon Mar 13 22:48:58 2006 From: matthias@REDACTED (Matthias Lang) Date: Mon, 13 Mar 2006 22:48:58 +0100 Subject: the Hopwood design process (was: Abstract patterns) In-Reply-To: <4415AA3D.5030101@blueyonder.co.uk> References: <200603122146.k2CLk3hS336587@atlas.otago.ac.nz> <4415AA3D.5030101@blueyonder.co.uk> Message-ID: <17429.59594.621322.594417@antilipe.corelatus.se> David Hopwood writes: > This seems like an odd way of deciding whether and how to implement a > language feature. The preferred process IMHO is something like: Your process specifically leaves out anything along the lines of "decide that the feature is more trouble than it's worth, give up". Using your design process, you end up documenting, implenting and optimising ("in that order") every idea, be it brilliant or harebrained, which drops into your inbox. Matthias > 1. Design the feature, paying attention (among other criteria) to whether > it *would* be feasible to implement efficiently given a bit of effort. > 2. Document the feature. > 3. Implement the feature naively. > 4. Gain experience with how people are using the feature, and a body of > code on which optimizations can be tested. > 5. Optimize the implementation. > > in that order. From david.nospam.hopwood@REDACTED Tue Mar 14 01:16:22 2006 From: david.nospam.hopwood@REDACTED (David Hopwood) Date: Tue, 14 Mar 2006 00:16:22 +0000 Subject: the Hopwood design process In-Reply-To: <17429.59594.621322.594417@antilipe.corelatus.se> References: <200603122146.k2CLk3hS336587@atlas.otago.ac.nz> <4415AA3D.5030101@blueyonder.co.uk> <17429.59594.621322.594417@antilipe.corelatus.se> Message-ID: <44160B56.9050501@blueyonder.co.uk> Matthias Lang wrote: > David Hopwood writes: > > > This seems like an odd way of deciding whether and how to implement a > > language feature. The preferred process IMHO is something like: > > Your process specifically leaves out anything along the lines of > "decide that the feature is more trouble than it's worth, give up". I deliberately didn't say anything about the criteria on which features should be included (because that would require a book, not a short post). In practice, once it has been decided to implement a feature, it will stay in the language forever, even if it is never optimized. Instances of features being *removed* from a programming language in the course of its incremental development are vanishingly rare. Partly this is because of backward compatibility, but mainly it is because the cost of retaining a feature is rather low. The mechanism by which features fall out of general use is by designing new languages, that don't have the features that were considered "more trouble than [they're] worth". But it is only possible with hindsight to assess which features those are. All language design is an ongoing experiment, and it should be expected that each language will have some features that are more useful than others. Actually, I'm in favour of placing the bar very high on introduction of new features. But it is possible to be too conservative. A situation where no new features are added can be almost fatal to a language's popularity: a large proportion of a language's potential user community will consider it to be dead if it isn't being actively revised. This is a rational response to the fact that most programming languages have insufficient resources supporting their development. 
> Using your design process, you end up documenting, implenting and > optimising ("in that order") every idea, be it brilliant or > harebrained, which drops into your inbox. Now you're just attacking a straw man. It is obvious that each of the steps will only happen if there are people motivated to do them. (Conversely, at least in the case of a language with an open spec and open-source implementation(s), each step *will* happen if someone decides to do it.) Most ideas will never be developed to the stage where they could be implemented, because no-one is sufficiently motivated to do the work needed for steps 1 and 2. If you look at the effort Joe has put into the design of abstract patterns, it is clear that at least step 1 has been completed, to a high standard IMHO, in that case. > > 1. Design the feature, paying attention (among other criteria) to whether > > it *would* be feasible to implement efficiently given a bit of effort. > > 2. Document the feature. > > 3. Implement the feature naively. > > 4. Gain experience with how people are using the feature, and a body of > > code on which optimizations can be tested. > > 5. Optimize the implementation. > > > > in that order. -- David Hopwood From david.nospam.hopwood@REDACTED Tue Mar 14 01:48:08 2006 From: david.nospam.hopwood@REDACTED (David Hopwood) Date: Tue, 14 Mar 2006 00:48:08 +0000 Subject: the Hopwood design process In-Reply-To: <44160B56.9050501@blueyonder.co.uk> References: <200603122146.k2CLk3hS336587@atlas.otago.ac.nz> <4415AA3D.5030101@blueyonder.co.uk> <17429.59594.621322.594417@antilipe.corelatus.se> <44160B56.9050501@blueyonder.co.uk> Message-ID: <441612C8.9040008@blueyonder.co.uk> David Hopwood wrote: > Most ideas will never be developed to the stage where they could be implemented, > because no-one is sufficiently motivated to do the work needed for steps 1 and 2. > If you look at the effort Joe Richard, I meant. > has put into the design of abstract patterns, it is clear that at least step 1 > has been completed, to a high standard IMHO, in that case. -- David Hopwood From bengt.kleberg@REDACTED Tue Mar 14 09:29:14 2006 From: bengt.kleberg@REDACTED (Bengt Kleberg) Date: Tue, 14 Mar 2006 09:29:14 +0100 Subject: the Hopwood design process In-Reply-To: <44160B56.9050501@blueyonder.co.uk> References: <200603122146.k2CLk3hS336587@atlas.otago.ac.nz> <4415AA3D.5030101@blueyonder.co.uk> <17429.59594.621322.594417@antilipe.corelatus.se> <44160B56.9050501@blueyonder.co.uk> Message-ID: <44167EDA.4000506@ericsson.com> On 2006-03-14 01:16, David Hopwood wrote: ...deleted > Actually, I'm in favour of placing the bar very high on introduction of new > features. But it is possible to be too conservative. A situation where no > new features are added can be almost fatal to a language's popularity: a > large proportion of a language's potential user community will consider it > to be dead if it isn't being actively revised. This is a rational response i think it is possible to avoid this problem (''dead if it isn't being actively revised''). have a very active development of the languages libraries/frameworks/etc. let the language develop new features very slowly. 
bengt From klacke@REDACTED Tue Mar 14 09:39:29 2006 From: klacke@REDACTED (Claes Wikstrom) Date: Tue, 14 Mar 2006 09:39:29 +0100 Subject: the Hopwood design process In-Reply-To: <44160B56.9050501@blueyonder.co.uk> References: <200603122146.k2CLk3hS336587@atlas.otago.ac.nz> <4415AA3D.5030101@blueyonder.co.uk> <17429.59594.621322.594417@antilipe.corelatus.se> <44160B56.9050501@blueyonder.co.uk> Message-ID: <44168141.9090900@hyber.org> David Hopwood wrote: > Instances of features being > *removed* from a programming language in the course of its incremental development > are vanishingly rare. Partly this is because of backward compatibility, but > mainly it is because the cost of retaining a feature is rather low. I know of at least two language features/constructs that I implemented for Erlang many years ago _and_ also removed them. Thought I'd tell the list about them - just to tell you what you all could have had. - process migration Bin = save_process(Pid), RemotePid ! {newproc, Bin}, and then at the other end receive {newproc, Bin} -> Pid = restore_process(Bin), - nukeable arrays T = {1,2,3,4,5,6}, T2 = nuke_element(3, T, newval), Once T had been poked, any future read/write references to T would render a badarg crash. The nuke_elem/3 BIF would destructively update the tuple and ensure that the old value could never be used. Both features - removed. /klacke -- Claes Wikstrom -- Caps lock is nowhere and http://www.hyber.org -- everything is under control cellphone: +46 70 2097763 From bjorn@REDACTED Tue Mar 14 10:30:39 2006 From: bjorn@REDACTED (Bjorn Gustavsson) Date: 14 Mar 2006 10:30:39 +0100 Subject: Abstract patterns In-Reply-To: <4415AA3D.5030101@blueyonder.co.uk> References: <200603122146.k2CLk3hS336587@atlas.otago.ac.nz> <4415AA3D.5030101@blueyonder.co.uk> Message-ID: Actually, I think that you have answered that one yourself in a later email: :-) > In practice, once it has been decided to implement a feature, it will stay in > the language forever, even if it is never optimized. Instances of features being > *removed* from a programming language in the course of its incremental development > are vanishingly rare. Partly this is because of backward compatibility, but > mainly it is because the cost of retaining a feature is rather low. That's why we think twice (or thrice) before actually starting to implement a new feature. Also, we don't want to be stuck with (more) half-implemented features, so if we are not sure that stage 3 would be enough or that we would ever reach stage 5, we don't start. Regarding abstract patterns, our decision was not to never implement it, but to not implement it NOW. /Bjorn David Hopwood writes: > Bjorn Gustavsson wrote: > > If you tell me where I can find the latest version of the paper, > > I'll have another look at it. > > > > What I meant by the phrase "implement efficiently" is that performance > > should be comparable to that of records. If that indeed would be possible > > to achive without inlining, we would be more interested in implementing > ^^^^^^^^^^^^^^^^ > > abstract patterns. > > This seems like an odd way of deciding whether and how to implement a > language feature. The preferred process IMHO is something like: > > 1. Design the feature, paying attention (among other criteria) to whether > it *would* be feasible to implement efficiently given a bit of effort. > 2. Document the feature. > 3. Implement the feature naively. > 4. 
Gain experience with how people are using the feature, and a body of > code on which optimizations can be tested. > 5. Optimize the implementation. > > in that order. > > -- > David Hopwood > -- Bj?rn Gustavsson, Erlang/OTP, Ericsson AB From bjorn@REDACTED Tue Mar 14 10:41:39 2006 From: bjorn@REDACTED (Bjorn Gustavsson) Date: 14 Mar 2006 10:41:39 +0100 Subject: the Hopwood design process In-Reply-To: <44168141.9090900@hyber.org> References: <200603122146.k2CLk3hS336587@atlas.otago.ac.nz> <4415AA3D.5030101@blueyonder.co.uk> <17429.59594.621322.594417@antilipe.corelatus.se> <44160B56.9050501@blueyonder.co.uk> <44168141.9090900@hyber.org> Message-ID: I think that confirms David's statement that it is vanishingly rare that features are removed. At the time you removed your features, the user base must have been much smaller than it is now. Nowadays we have trouble removing even minor (mis)features. During the time that I have worked with Erlang/OTP (from Dec 1996), I can only remember that we have removed two major features: - Zombie processes. They turned out to be not as useful as originally thought. - Vectors. Similar to your nukeable arrays, but there was an exception list to allow updates in a functional way. Removed because of disappointing performance. /Bjorn Claes Wikstrom writes: > David Hopwood wrote: > > Instances of features being > > *removed* from a programming language in the course of its incremental development > > are vanishingly rare. Partly this is because of backward compatibility, but > > mainly it is because the cost of retaining a feature is rather low. > > I know of at least two language features/constructs that I implemented > for Erlang many years ago _and_ also removed them. [...] -- Bj?rn Gustavsson, Erlang/OTP, Ericsson AB From thomasl_erlang@REDACTED Tue Mar 14 13:10:50 2006 From: thomasl_erlang@REDACTED (Thomas Lindgren) Date: Tue, 14 Mar 2006 04:10:50 -0800 (PST) Subject: the Hopwood design process In-Reply-To: Message-ID: <20060314121050.50691.qmail@web36704.mail.mud.yahoo.com> --- Bjorn Gustavsson wrote: > I think that confirms David's statement that it is > vanishingly rare > that features are removed. At the time you removed > your features, the > user base must have been much smaller than it is > now. Nowadays we have > trouble removing even minor (mis)features. > > During the time that I have worked with Erlang/OTP > (from Dec 1996), > I can only remember that we have removed two major > features: > > - Zombie processes. They turned out to be not as > useful as originally > thought. > > - Vectors. Similar to your nukeable arrays, but > there was an exception list > to allow updates in a functional way. Removed > because of disappointing > performance. (The performance problems were due to GC integration troubles, weren't they? I seem to recall a mention of some extremely painful patch with full garbage collection done at every update or something along those lines?) Anyway, let me add two things to the new features discussion. First, pragmatically speaking, it might be useful to release some features as "experimental", with the explicit proviso that they can be modified or perhaps even removed in the future, until classified as "stable". This allows for community feedback without locking into a given design beforehand. (Another option is to have multiple erlang versions, though, except for me, there doesn't seem to be a lot of enthusiasm for this :-). 
Second, and more generally, I think new features would fare better if we had some language principles to tell us what new features strengthen the language, and which diffuse it. The original erlang fits together quite well and is elegant and easy to learn and use. Adding features piecemeal means one runs the risk of ending up with Frankenstein's Monster, if you see what I mean. Maybe I'm really asking for a Benevolent Dictator For Life (aka, chief architect). Best, Thomas __________________________________________________ Do You Yahoo!? Tired of spam? Yahoo! Mail has the best spam protection around http://mail.yahoo.com From bertil.karlsson@REDACTED Tue Mar 14 13:21:30 2006 From: bertil.karlsson@REDACTED (Bertil Karlsson) Date: Tue, 14 Mar 2006 13:21:30 +0100 Subject: OTP-5101 inets badmatch in R10B-10 In-Reply-To: <20060312224630.GH32539@delora.autosys.us> References: <20060312224630.GH32539@delora.autosys.us> Message-ID: <4416B54A.4070606@ericsson.com> Hi, 1) No 2) I will have a new look at that problem and hopefully have it fixed to R11. If you have more info about your problem with inets, pleace let me know. /Bertil Michael McDaniel wrote: > Here is the fix that was thought to have been done early in the R10B > series: > > " > OTP-5101 A programming error could cause a badmatch in the http-client > when the http response was chunk decoded. > " > > I still have this problem in R10B-10 which has been discussed before. > See these threads: > > http://www.erlang.org/ml-archive/erlang-questions/200511/msg00147.html > > and > > http://www.erlang.org/ml-archive/erlang-questions/200511/msg00161.html > > > Perhaps it is because I am running over SSL ? > > > QUESTIONS: > > 1) Will ibrowse be included in OTP in R11 ? > 2) Will the http-client be corrected in R11 to eliminate this problem? > > > thanks, > > ~Michael > From chandrashekhar.mullaparthi@REDACTED Tue Mar 14 14:28:12 2006 From: chandrashekhar.mullaparthi@REDACTED (chandru) Date: Tue, 14 Mar 2006 13:28:12 +0000 Subject: the Hopwood design process In-Reply-To: <20060314121050.50691.qmail@web36704.mail.mud.yahoo.com> References: <20060314121050.50691.qmail@web36704.mail.mud.yahoo.com> Message-ID: On 14/03/06, Thomas Lindgren wrote: > > Anyway, let me add two things to the new features > discussion. > > First, pragmatically speaking, it might be useful to > release some features as "experimental", with the > explicit proviso that they can be modified or perhaps > even removed in the future, until classified as > "stable". This allows for community feedback without > locking into a given design beforehand. This is quite easily done when new libraries are being introduced which do not need any special support from the runtime system. But when you are talking about a new language feature such as this I guess it becomes very hard because you've got to change the compiler, beam, debugger and heavens knows what else - all in one go. To do that in a non commital way can be very hard? Chandru From chandrashekhar.mullaparthi@REDACTED Tue Mar 14 15:12:18 2006 From: chandrashekhar.mullaparthi@REDACTED (chandru) Date: Tue, 14 Mar 2006 14:12:18 +0000 Subject: JMS Message-ID: One of our IT systems wanted to talk to one of my erlang systems using JMS. So I thought I can implement the wire protocol of JMS and got it's spec from the Sun website. Here is an extract. 1.1 Abstract JMS provides a common way for Java programs to create, send, receive and read an enterprise messaging system's messages. 
1.2.4 What JMS Does Not Include Load Balancing/Fault Tolerance Error/Advisory Notification Administration Security Wire Protocol Message Type Repository Now, what is the point of a f*ng messaging system if the wire protocol, load balancing, fault tolerance aren't specified. And then users moan that different vendor implementations of JMS do not interoperate! Duh! Sorry about the rant. I'm working from home today and I had to vent my frustration somehow :-) From thomasl_erlang@REDACTED Tue Mar 14 15:22:52 2006 From: thomasl_erlang@REDACTED (Thomas Lindgren) Date: Tue, 14 Mar 2006 06:22:52 -0800 (PST) Subject: the Hopwood design process In-Reply-To: Message-ID: <20060314142252.78705.qmail@web36709.mail.mud.yahoo.com> --- chandru wrote: > On 14/03/06, Thomas Lindgren > wrote: > > > > Anyway, let me add two things to the new features > > discussion. > > > > First, pragmatically speaking, it might be useful > to > > release some features as "experimental", with the > > explicit proviso that they can be modified or > perhaps > > even removed in the future, until classified as > > "stable". This allows for community feedback > without > > locking into a given design beforehand. > > This is quite easily done when new libraries are > being introduced > which do not need any special support from the > runtime system. But > when you are talking about a new language feature > such as this I guess > it becomes very hard because you've got to change > the compiler, beam, > debugger and heavens knows what else - all in one > go. To do that in a > non commital way can be very hard? Well, all it means is that OTP reserves the right to change how an "experimental" feature works (in particular semantics, which I guess is the sticky part). Backwards compatibility not guaranteed. So, it's more of an organizational/social/community issue than an implementation/coding thing. Does that make sense? Best, Thomas __________________________________________________ Do You Yahoo!? Tired of spam? Yahoo! Mail has the best spam protection around http://mail.yahoo.com From erlang@REDACTED Tue Mar 14 17:11:23 2006 From: erlang@REDACTED (Michael McDaniel) Date: Tue, 14 Mar 2006 08:11:23 -0800 Subject: OTP-5101 inets badmatch in R10B-10 In-Reply-To: <4416B54A.4070606@ericsson.com> References: <20060312224630.GH32539@delora.autosys.us> <4416B54A.4070606@ericsson.com> Message-ID: <20060314161123.GT32539@delora.autosys.us> Thank you. What information may I provide different than the details in the earlier threads? I will do my best to provide whatever information you need to correct the problem. If you need to reproduce the problem exactly as I reproduce it (or have a complete tcp/ethereal dump) then I will need some time receiving written permissions from my client (and of course we will need to take this off-list ). ~Michael On Tue, Mar 14, 2006 at 01:21:30PM +0100, Bertil Karlsson wrote: > Hi, > > 1) No > 2) I will have a new look at that problem and hopefully have it fixed to > R11. > If you have more info about your problem with inets, pleace let me know. > > /Bertil > > > Michael McDaniel wrote: > > Here is the fix that was thought to have been done early in the R10B > > series: > > > >" > > OTP-5101 A programming error could cause a badmatch in the http-client > > when the http response was chunk decoded. > >" > > > > I still have this problem in R10B-10 which has been discussed before. 
> > See these threads: > > > > http://www.erlang.org/ml-archive/erlang-questions/200511/msg00147.html > > > > and > > > > http://www.erlang.org/ml-archive/erlang-questions/200511/msg00161.html > > > > > > Perhaps it is because I am running over SSL ? > > > > > > QUESTIONS: > > > > 1) Will ibrowse be included in OTP in R11 ? > > 2) Will the http-client be corrected in R11 to eliminate this problem? > > > > > > thanks, > > > > ~Michael > > -- Michael McDaniel Portland, Oregon, USA http://autosys.us +1 503 283 5284 From david.nospam.hopwood@REDACTED Tue Mar 14 18:02:55 2006 From: david.nospam.hopwood@REDACTED (David Hopwood) Date: Tue, 14 Mar 2006 17:02:55 +0000 Subject: the Hopwood design process In-Reply-To: References: <20060314121050.50691.qmail@web36704.mail.mud.yahoo.com> Message-ID: <4416F73F.3080908@blueyonder.co.uk> chandru wrote: > On 14/03/06, Thomas Lindgren wrote: > >>Anyway, let me add two things to the new features >>discussion. >> >>First, pragmatically speaking, it might be useful to >>release some features as "experimental", with the >>explicit proviso that they can be modified or perhaps >>even removed in the future, until classified as >>"stable". This allows for community feedback without >>locking into a given design beforehand. > > This is quite easily done when new libraries are being introduced > which do not need any special support from the runtime system. But > when you are talking about a new language feature such as this I guess > it becomes very hard because you've got to change the compiler, beam, > debugger and heavens knows what else - all in one go. To do that in a > non commital way can be very hard? In the E language, experimental features are enabled or disabled using pragmas. For some examples see . -- David Hopwood From ryanobjc@REDACTED Tue Mar 14 21:04:28 2006 From: ryanobjc@REDACTED (Ryan Rawson) Date: Tue, 14 Mar 2006 12:04:28 -0800 Subject: JMS In-Reply-To: References: Message-ID: <78568af10603141204n1fb85065od3e57fb91183c280@mail.gmail.com> JMS is an API actually - it doesn't really provide much in the way of anything in terms of implementing or working code straight up. Vendors like tibco (with their rendevouz product) and others supply an implementation (usually with some kind of JNI) that allow you to connect Java to those things. I've used tibco for a number of years, and at our site it's ended up not being so great. The best thing about it, and products like it, is you get machine-independent addressing. As in you can say "send a message to this subject" but you don't need to know who or where is listening. The decoupling sounds like a good idea, but in practice you need to know who is consuming your messages anyways. I would use JMS for a smaller enterprise, with < 1000 nodes. -ryan On 3/14/06, chandru wrote: > One of our IT systems wanted to talk to one of my erlang systems using > JMS. So I thought I can implement the wire protocol of JMS and got > it's spec from the Sun website. Here is an extract. > > > 1.1 Abstract > JMS provides a common way for Java programs to create, send, receive > and read an enterprise messaging system's messages. > > 1.2.4 What JMS Does Not Include > Load Balancing/Fault Tolerance > Error/Advisory Notification > Administration > Security > Wire Protocol > Message Type Repository > > > > Now, what is the point of a f*ng messaging system if the wire > protocol, load balancing, fault tolerance aren't specified. And then > users moan that different vendor implementations of JMS do not > interoperate! Duh! 
> > Sorry about the rant. I'm working from home today and I had to vent my > frustration somehow :-) > From chandrashekhar.mullaparthi@REDACTED Tue Mar 14 21:16:05 2006 From: chandrashekhar.mullaparthi@REDACTED (chandru) Date: Tue, 14 Mar 2006 20:16:05 +0000 Subject: JMS In-Reply-To: <78568af10603141204n1fb85065od3e57fb91183c280@mail.gmail.com> References: <78568af10603141204n1fb85065od3e57fb91183c280@mail.gmail.com> Message-ID: On 14/03/06, Ryan Rawson wrote: > JMS is an API actually - it doesn't really provide much in the way of > anything in terms of implementing or working code straight up. I realised that after reading the JMS spec. As with most things about Java, it doens't quite live up to it's name. > Vendors like tibco (with their rendevouz product) and others supply an > implementation (usually with some kind of JNI) that allow you to > connect Java to those things. Yep - we have that beast in our network. Not quite sure how useful it is - we seem to add one "bus" after another for communication and still pretty much achieving the same thing. Chandru From ok@REDACTED Tue Mar 14 23:45:12 2006 From: ok@REDACTED (Richard A. O'Keefe) Date: Wed, 15 Mar 2006 11:45:12 +1300 (NZDT) Subject: the Hopwood design process Message-ID: <200603142245.k2EMjC5W355770@atlas.otago.ac.nz> Second, and more generally, I think new features would fare better if we had some language principles to tell us what new features strengthen the language, and which diffuse it. In this case it's an easy decision: the preprocessor is violently at odds with everything else in the language. One of the papers I wrote for SERC had the title "Delenda Est Preprocessor". There really isn't anything that can be done with the processor that could not be done better without it. In particular, one of the major things about Erlang is the module system combined with hot loading, but the preprocessor subverts the module system and causes dependencies between source units that are not and cannot be tracked by the run time system. This DOESN'T mean that abstract patterns are right, but it DOES mean that records done via the preprocessor are wrong. In particular, it doesn't mean that record *syntax* is wrong (although I believe it muddles up three different things that would be better separated, so that some of those things could be more widely available), it means that *accessing* record definitions via the preprocessor is the wrong way to do it. I was rather against funs when they first appeared; I was in the middle of writing a paper about how one would infer certain behavioural properties of functions in Erlang, and with the introduction of funs you just *couldn't* any more. But by this criterion they are a good thing: they make higher order functions easier to define and use and thereby make code easier to get right first time and to maintain thereafter, and they are certainly in keeping with Erlang's functional spirit. List comprehensions are not quite as clear a case as funs. To this day SML manages without them. On the other hand, whenever I have to write SML I am greatly annoyed by their absence. List comprehensions are local, concise, and intention-revealing. In another thread we've been arguing about a notation which, in the form ( Result_Expression where P1 = Init1 then Step1, ..., Pn = Initn then Stepn || generators and filters ), I have come to think _might_ fit Erlang quite nicely. Again, it is local, concise, and intention-revealing, and as the overall structure ... 
|| is similar to the structure of list comprehensions, it at the very least doesn't *conflict* with the current language. And it has the very nice (E0 where P1 = E1, ..., Pn = En) local binding form as a special case, and even (E0) as a special case! I now regard it as settled that this would strengthen Erlang and not diffuse it, but the *practical* question "would it strengthen Erlang ENOUGH to be worth doing" remains. For example, I looked in the sys_core_dsetel module. The first thing I saw that looked like a candidate for this new form was bind_vars/2, but it had loop dependencies that made the new form inapplicable. The second thing that looked like a candidate for this new form was visit_pats/2, and that one worked. Until I realised that visit_pat/3 called back to the visit_pats/3 function I had just eliminated. The first thing where the new form *could* be used was restore_vars([V|Vs], Env0, Env1) -> case dict:find(V, Env0) of {ok, N} -> restore_vars(Vs, Env0, dict:store(V, N, Env1)); error -> restore_vars(Vs, Env0, dict:erase(V, Env1)) end; restore_vars([], _, Env1) -> Env1. which becomes restore_vars(Vs, Env0, Env1) -> (Env where Env = Env1 then case dict:find(V, Env0) of {ok,N} -> dict:store(V, N, Env) ; error -> dict:erase(V, Env) end || V <- Vs). Given that the original code could have been written as restore_vars([], _, Env1) -> Env1; restore_vars([V|Vs], Env0, Env1) -> restore_vars(Vs, Env0, case dict:find(V, Env0) of {ok,N} -> dict:store(V, N, Env1) ; error -> dict:erase(V, Env1) end). or as restore_vars(Vs, Env0, Env1) -> lists:foldl(fun(V, Env) -> case dict:find(V, Env0) of {ok,N} -> dict:store(V, N, Env) ; error -> dict:erase(V, Env) end end, Env1, Vs). and that this was the only one of three apparent candidates where the new notation could in fact be used, there doesn't seem to be much of a payoff in that particular module. This is a case where I suggest that an empirical study of how often the new form could be used and what difference it would make to readability is needed. We can't always make a decision just on how well something fits. From kostis@REDACTED Wed Mar 15 00:53:48 2006 From: kostis@REDACTED (Kostis Sagonas) Date: Wed, 15 Mar 2006 00:53:48 +0100 (MET) Subject: Hot loading and preprocessor [was: Re: the Hopwood design process] In-Reply-To: Mail from '"Richard A. O'Keefe" ' dated: Wed, 15 Mar 2006 11:45:12 +1300 (NZDT) Message-ID: <200603142353.k2ENrmZe001630@spikklubban.it.uu.se> Richard A. O'Keefe wrote: > There really isn't anything that can be done with the processor that > could not be done better without it. Quite a strong statement, but probably true. > In particular, one of the major things about Erlang is the module > system combined with hot loading, but the preprocessor subverts > the module system and causes dependencies between source units > that are not and cannot be tracked by the run time system. I am not sure I get this part though. AFAIK, the Erlang run time system does not track any dependencies between modules anyway -- modules can be hot (re)loaded on an individual basis and you get "reload-and-pray" semantics when you do so and forget to update modules that these modules depend on. However, even if it did, dependencies could take preprocessing into account. For example, the compiler+loader could easily record information of the form "the code of all this set of modules has a dependency on these record declarations". Even today, this info is present when compiling with +debug_info. 
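For example, a small sketch of how that information could be read back out of a .beam file compiled with +debug_info (the function name and file name here are only illustrative, not an existing API):

%% List the record declarations a module was compiled against, by
%% pulling out the abstract code that +debug_info stores in the
%% .beam file.  Returns a list of {RecordName, NumberOfFields} pairs.
records_in(BeamFile) ->
    case beam_lib:chunks(BeamFile, [abstract_code]) of
        {ok, {_Mod, [{abstract_code, {raw_abstract_v1, Forms}}]}} ->
            [{Name, length(Fields)} ||
                {attribute, _Line, record, {Name, Fields}} <- Forms];
        {ok, {_Mod, [{abstract_code, no_abstract_code}]}} ->
            no_debug_info
    end.

A compiler+loader that wanted to track record dependencies could collect exactly this kind of information at load time, e.g. by calling records_in("some_module.beam") for each module it loads.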
I certainly do not want to defend the preprocessor, but I do not really see how the preprocessor is to blame for what is being discussed here... How are records different than e.g. changing the types of data structures that some module obtains from some other module and manipulates? Cheers, Kostis From ok@REDACTED Wed Mar 15 04:57:00 2006 From: ok@REDACTED (Richard A. O'Keefe) Date: Wed, 15 Mar 2006 16:57:00 +1300 (NZDT) Subject: Hot loading and preprocessor [was: Re: the Hopwood design process] Message-ID: <200603150357.k2F3v080365417@atlas.otago.ac.nz> I wrote: > There really isn't anything that can be done with the processor > [should have been "preprocessor"] that > could not be done better without it. Kostis Sagonas replied: Quite a strong statement, but probably true. I have tried the exercise of rewriting a couple of thousand lines of OTP sources, about 4 years ago. It's true all right. I also said that > he preprocessor subverts > the module system and causes dependencies between source units > that are not and cannot be tracked by the run time system. I am not sure I get this part though. AFAIK, the Erlang run time system does not track any dependencies between modules anyway -- modules can be hot (re)loaded on an individual basis and you get "reload-and-pray" semantics when you do so and forget to update modules that these modules depend on. I didn't say that the existing system does track module dependencies. It could, and I think it definitely should. As a regular reader of comp.risks, I prefer "tentatively reload and check" to "reload and pray" *if* the checks can be reliable. But that relies on *having* modules, and having language constructs that respect modules. Let me briefly recapitulate another old proposal: -use_early(Module). means that this module depends on Module; calls with Module: as a module prefix should be accepted without comment; the dependency between this module and Module is so close that the compiler may inline definitions from Module; this module may be reloaded without necessarily reloading Module, but if Module is reloaded a corresponding version of this module should be loaded at the same time. -use_late(Module). means that this module depends on Module; calls with Module: as a module prefix should be accepted without comment; the compiler should assume that Module may be reloaded without this module being reloaded. (That doesn't preclude inlining, but it would seem to require on-the-fly recompilation of some kind.) -use_early(Module, [F1/N1, ..., Fk/Nk]). means the same as -use_early(Module) plus in addition any call to a function (or pattern) in Module other than one listed in a declaration like this should provoke a compiler warning. It is an error if Module does not define and export F1/N1 ... Fk/Nk, to be reported no later than the time that this module and Module are both loaded. -use_late(Module, [F1/N1, ..., Fk/Nk]). means the same as -use_late(Module) plus in addition any call to a function (or pattern) in Module other than one listed in a declaration like this should provoke a compiler warning. If Module does not define and export some Fi/Ni, that need not be reported until Module:Fi/Ni is called. -import_early(Module, [F1/N1, ..., Fk/Nk]). means the same as -use_early(Module, [F1/N1, ..., Fk/Nk]) plus in addition any function listed in a declaration like this may be used without a module prefix; it is an error if any Fi/Ni is declared in this module. -import_late(Module, [F1/N1, ..., Fk/Nk]). 
plus in addition any function listed in a declaration like this may be used without a module prefix; it is an error if any Fi/Ni is declared in this module. The modules of an application are often (though not always) reloaded as a unit, so it makes sense for modules in an application to be able to -use_early or -import_early each other. In fact, it even makes sense for the compiled modules of an application to be held in a single file, so that it is _easier_ to reload the whole thing than to reload just one file. (End of recapitulation.) However, even if it did, dependencies could take preprocessing into account. For example, the compiler+loader could easily record information of the form "the code of all this set of modules has a dependency on these record declarations". Even today, this info is present when compiling with +debug_info. Well, not quite. You see the preprocessor can create *negative* dependencies. With the declarations above, we can create a dependency that says module M1 relies on module M2 providing F/N. With the preprocessor, we can create dependencies that say file F1 relies on file F2 *not* defining ?X. More precisely, *use* such and such of file F1 relies on file F2 not defining ?X and it is possible for one use of F1 to rely on F2 HAVING ?X and another use of F1 in the same application to rely on F2 NOT having ?X. Also, the things that the dependencies relate are (uses of) files, not modules. Without the preprocessor, it's just modules. One reason this matters is that the module name space is under Erlang's control, but the file name space is not. I certainly do not want to defend the preprocessor, but I do not really see how the preprocessor is to blame for what is being discussed here... I'm not sure what you mean by "what is being discussed here". I hope I've explained in a bit more detail what the preprocessor does to dependencies. How are records different than e.g. changing the types of data structures that some module obtains from some other module and manipulates? This is like asking "how is a brick different from breaking an egg?" One is a thing, the other is an activity. The main issue is that Erlang is a language based on modules, and records are not based on and do not respect modules. If you obtain a data structure from some other module and manipulate it, *as long as you use the functions exported from that module for the purpose of such manipulation*, you should be OK. (Until the module is revised and reloaded, and even then abstract patterns and psi-terms can help a lot.) If you get a record from another module and manipulate it using record syntax, you *have* to bypass the module system to do so. Both this module and the other module must get the record definition from something that IS NOT A MODULE. (Or both modules could have their own declaration for the record type, which also counts as bypassing the module system.) Don't get me wrong. I *agree* that something *like* records are so very useful that Erlang had to have something *like* them. I even agree that the present approach to records is better than nothing at all, and was worth a try if nothing better could have been invented. The issue is whether we could have something *like* records that meshes well with the module system, or whether it's necessary to imitate C quite so slavishly. From ok@REDACTED Wed Mar 15 05:39:52 2006 From: ok@REDACTED (Richard A. 
O'Keefe) Date: Wed, 15 Mar 2006 17:39:52 +1300 (NZDT) Subject: Abstract patterns Message-ID: <200603150439.k2F4dqQQ366630@atlas.otago.ac.nz> Bjorn Gustavsson wrote: > What I meant by the phrase "implement efficiently" is that performance should be comparable to that of records. If that indeed would be possible to achive without inlining, we would be ^^^^^^^^^^^^^^^^ more interested in implementing abstract patterns. I think we are comparing apples with centipedes here. Let me show you a little table. Definition is Records are Abstract patterns are in same .erl file fast fast because inlined if inlined in an included file fast fast because inlined if inlined imported from INFINITELY SLOW as fast as ordinary another module you can't do it function calls. If you just took existing code and replaced -record declarations by the appropriate abstract patterns (with corresponding changes elsewhere, of course), you would expect the abstract pattern version to be the SAME speed as the record version because it should be pretty much the SAME code. Abstract patterns should be as fast as records except when they are INFINITELY FASTER, because records can't be used at all. Just how slow would abstract patterns be without inlining? I wrote a little test case: t1() -> Xs = data(), timer:tc(?MODULE, t1, [Xs,0]). t1([{foo,K,_,_}|Xs], N) -> t1(Xs, N+K); t1([], N) -> N. t2() -> Xs = data(), timer:tc(?MODULE, t2, [Xs,0]). t2([X|Xs], N) -> {K,_,_} = mod2:foo(X), t2(Xs, N+K); t2([], N) -> N. where in mod2, foo({foo,X,Y,Z}) -> {X,Y,Z}. t2() was 3.7 times slower than t1() [5.2 times slower if [native]]. This is pretty much a worst case comparison; even when compiled out-of-line there are better things for abstract patterns to do than *literally* build tuples. I wrote another test case, t3(), which avoided building a tuple and then matching it again. t3() was 1.2 times slower than t1() [2.1 times slower if [native]]. I would expect the kind of code one could generate, taking advantage of the special restricted structure of abstract patterns, to be much closer to the t3() time than the t2() time. But remember, even t2() was INFINITELY FASTER than the analogue with records, because there IS no analogue with records. From ke.han@REDACTED Wed Mar 15 06:53:37 2006 From: ke.han@REDACTED (ke han) Date: Wed, 15 Mar 2006 13:53:37 +0800 Subject: Abstract patterns In-Reply-To: <200603150439.k2F4dqQQ366630@atlas.otago.ac.nz> References: <200603150439.k2F4dqQQ366630@atlas.otago.ac.nz> Message-ID: Richard, I've just read your paper on abstract patterns (found it in-lined in a mailist post from a year or two ago). After a day or so of letting it sink in, I am coming around to your point of view on its usefulness in encapsulating term structure. The problems with records is obvious but the record update and matching syntax is much more readable than the update function of your abstract patterns (see your quote below). L2 = L#log{blocked_by = none, status = ok} which is arguably prettier than L2 = L#log_blocked_by(none)#log_status(ok) #log{status == S, users = N} = L which is arguably prettier than #log_status(S) = #log_users(N) = L So the question I have is, is this syntax as good as it gets? Is there any way to improve on this syntax? particularly the update and matching. thanks, ke han From ke.han@REDACTED Wed Mar 15 07:10:30 2006 From: ke.han@REDACTED (ke han) Date: Wed, 15 Mar 2006 14:10:30 +0800 Subject: is anyone using TextMate ? 
Message-ID: I have been using TextMate on my shiny new iMac-intel and have come to find TextMate a fantastic editor. Has anyone created a bundle for erlang syntax? Yes, the TextMate site has a repository for such things and erlang is on the wish list. I am hoping someone has a half-baked version they haven't published that I can hack. thanks, ke han From kostis@REDACTED Wed Mar 15 10:02:57 2006 From: kostis@REDACTED (Kostis Sagonas) Date: Wed, 15 Mar 2006 10:02:57 +0100 (MET) Subject: Dialyzer v1.4.0 Message-ID: <200603150902.k2F92vsx002172@spikklubban.it.uu.se> We are very proud to announce the release of Dialyzer v1.4.0, which can be downloaded from: http://www.it.uu.se/research/group/hipe/dialyzer/ Dialyzer v1.4.0 requires Erlang/OTP R10B-10. As a matter of fact, it also requires a small patch to one of the HiPE files in order to work properly. Instructions are given in the above homepage. Dialyzer v1.4.0 is significantly different (more effective and considerably faster) than previous releases. It contains changes made during a period of more than a year, so we strongly recommend to Dialyzer users to upgrade and to new users to try it out. Enjoy! Kostis Sagonas and Tobias Lindahl. PS. Most probably, this will be the last major release of Dialyzer from the site mentioned above. Feedback is welcome. From kostis@REDACTED Wed Mar 15 11:33:10 2006 From: kostis@REDACTED (Kostis Sagonas) Date: Wed, 15 Mar 2006 11:33:10 +0100 (MET) Subject: Hot loading and preprocessor [was: Re: the Hopwood design process] In-Reply-To: Mail from '"Richard A. O'Keefe" ' dated: Wed, 15 Mar 2006 16:57:00 +1300 (NZDT) Message-ID: <200603151033.k2FAXAqw024962@spikklubban.it.uu.se> I asked: > > How are records different than e.g. changing the types of data > > structures that some module obtains from some other module and > > manipulates? and Richard A. O'Keefe replied: > This is like asking "how is a brick different from breaking an egg?" > One is a thing, the other is an activity. Thanks for this clarification. Let me repeat my question in a way such that grammar will not stand on our way: Currently, records are just syntactic sugar for tuples. What exactly is it that makes records different from tuples -- or for that matter lists, or any other data structure -- which are shared between modules? Seems to me that the fundamental problem is not keeping track of data structure dependencies across modules, not whether records are implemented using the preprocessor or via some other mechanism. > The main issue is that Erlang is a language based on modules, > and records are not based on and do not respect modules. I agree with the first part: "records are not based on modules". The second part is totally meaningless though. There is nothing in records that makes them "respect modules" or for that matter "not respect modules". The issue is left totally up to how one programs. > If you obtain a data structure from some other module and > manipulate it, *as long as you use the functions exported from that > module for the purpose of such manipulation*, you should be OK. > (Until the module is revised and reloaded, and even then abstract > patterns and psi-terms can help a lot.) > > If you get a record from another module and manipulate it using > record syntax, you *have* to bypass the module system to do so. > Both this module and the other module must get the record > definition from something that IS NOT A MODULE. I do not really see the *have* part, but even if true, my reaction to this is: So what? 
> (Or both modules could have their own declaration for the record > type, which also counts as bypassing the module system.) What exactly is wrong with the following way of coding? File "rec.hrl" defines the record and defines wrappers for its accessor functions using the preprocessor. It is shared between modules, as it should, because this is what header files are about. Module "rec" defines the functions that create and update records. Module "foo" calls the "rec" module to create a record and then uses the preprocessor twice (once in the define and once in the #rec.b). %------------------------ rec.hrl ----------------- -record(rec, {a,b}). -define(rec__a(R), R#rec.a). -define(rec__b(R), R#rec.b). %------------------------ rec.erl ----------------- -module(rec). -export([new/0, update_a/2, update_b/2]). -include("rec.hrl"). new() -> #rec{}. update_a(R, Val) -> R#rec{a = Val}. update_b(R, Val) -> R#rec{b = Val}. %------------------------ m.erl ------------------- -module(m). -export([foo/0]). -include("rec.hrl"). foo() -> R = rec:update_b(rec:new(), 42), ?rec__b(R). %-------------------------------------------------- I specifically used the preprocessor twice in the last line of foo/0 to illustrate the point that the preprocessor is really not to blame for what is being discussed here -- as a matter of fact, it comes in handy because it allows the accessor functions to be shared between modules. (Although they could just as easily be part of "rec.erl".) The fact that there is no mechanism to keep track of data structure dependencies between modules and that modules can be reloaded on an individual basis is really orthogonal to the record vs. abstract patterns discussion and how these are implemented. In particular, the preprocessor has absolutely nothing to do with it. This is my point. Cheers, Kostis From anders.nygren@REDACTED Wed Mar 15 22:26:55 2006 From: anders.nygren@REDACTED (anders) Date: Wed, 15 Mar 2006 15:26:55 -0600 Subject: erlmerge patch Message-ID: <200603151526.56002.anders.nygren@telteq.com.mx> I tried erlmerge and found that it a, lists packages in reverse order b, the timeout for downloading packages is only 5 seconds, which was not enough for me to download a package from trapexit. The enclosed patch fixes these problems. /Anders diff -r erlmerge-0.6/src/erlmerge.erl /usr/local/src/erlmerge-0.6/src/erlmerge.erl 127c127 < print(lists:keysort(#app.name, L)); --- > print(lists:reverse(lists:keysort(#app.name, L))); 525c525 < Timeout = 5000, --- > Timeout = 30000, From anders.nygren@REDACTED Wed Mar 15 18:10:14 2006 From: anders.nygren@REDACTED (anders) Date: Wed, 15 Mar 2006 11:10:14 -0600 Subject: erlmerge patch Message-ID: <200603151110.15329.anders.nygren@telteq.com.mx> I tried erlmerge and found that it a, lists packages in reverse order b, the timeout for downloading packages is only 5 seconds The enclosed patch fixes these problems. /Anders diff -r erlmerge-0.6/src/erlmerge.erl /usr/local/src/erlmerge-0.6/src/erlmerge.erl 127c127 < print(lists:keysort(#app.name, L)); --- > print(lists:reverse(lists:keysort(#app.name, L))); 525c525 < Timeout = 5000, --- > Timeout = 30000, From ft@REDACTED Thu Mar 16 10:05:20 2006 From: ft@REDACTED (Fredrik Thulin) Date: Thu, 16 Mar 2006 10:05:20 +0100 Subject: distributed erlang failure reason Message-ID: <200603161005.20995.ft@it.su.se> Hi How do I determine _why_ I could not connect to a remote node? 
I want to make my command line utilities capable of informing the user that the reason they can't get status information from my running nodes is that they have the wrong cookie for example. I tried (ctl@REDACTED)1> net_kernel:monitor_nodes(true, [nodedown_reason]). ok (ctl@REDACTED)2> erlang:set_cookie(node(), 'foo'). true (ctl@REDACTED)3> net_kernel:connect_node('incomingproxy@REDACTED'). false (ctl@REDACTED)4> flush(). ok (ctl@REDACTED)5> but as you can see, no nodedown message was received, and the result of net_kernel:connect_node/1 was a simple 'false'. Likewise, on the node 'incomingproxy@REDACTED', I had set up monitor_nodes and got nothing although I did get a =ERROR REPORT==== 16-Mar-2006::09:51:32 === ** Connection attempt from disallowed node 'ctl@REDACTED' ** printed on the console. Also, that error message does not contain enough detail as it makes no difference if the reason is the wrong cookie, or that the node was not in the list of allowed nodes set using net_kernel:allow/1. /Fredrik From olivier@REDACTED Thu Mar 16 11:57:42 2006 From: olivier@REDACTED (olivier) Date: Thu, 16 Mar 2006 11:57:42 +0100 Subject: [Mnesia] Can't create a table named "serial" Message-ID: <441944A6.8030102@dolphian.com> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 Hi, I've found an interesting bug|feature in Mnesia: if you create a table named "serial", it will crash at the next restart. Please take a look at the attached files for a demo. It seems that the atom 'serial' has a special meaning in Mnesia. Thanks, - -- Olivier Girondel -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.2 (GNU/Linux) Comment: Using GnuPG with Thunderbird - http://enigmail.mozdev.org iD8DBQFEGUSmpqVXaJzJYNIRAgSyAJ49m0HhzjaGX3AJGOhuiF2jcK7hvwCeOBFc ctpeT1UGoHIv50WvXtHHmOc= =SYcr -----END PGP SIGNATURE----- -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: crash.erl URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: crash.sh Type: application/x-shellscript Size: 913 bytes Desc: not available URL: -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: crash.out URL: From xlcr@REDACTED Thu Mar 16 14:03:45 2006 From: xlcr@REDACTED (Nick Linker) Date: Thu, 16 Mar 2006 19:03:45 +0600 Subject: Erlang/OTP R10B-10 has been released In-Reply-To: References: <44113351.5040207@mail.ru> Message-ID: <44196231.3060308@mail.ru> Thank you for the answer! Bjorn Gustavsson wrote: >Abstract patterns is tricky to implement efficiently as it would >require inlining across module boundaries. Inlining across modules >boundaries with retained semantics for code change is possible to >implement, but tricky. Currently, we have no plans to implement >abstract patterns. > >/Bjorn > Best regards, Linker Nick. From emil.oberg@REDACTED Thu Mar 16 17:52:01 2006 From: emil.oberg@REDACTED (=?iso-8859-1?Q?Emil_=D6berg_=28LN/EAB=29?=) Date: Thu, 16 Mar 2006 17:52:01 +0100 Subject: Iteration over lists Message-ID: <94B96B3383630441B365F7649034034802CBD99B@esealmw103.eemea.ericsson.se> Hi. Sitting here doing some profiling on our system and started to wonder about the performance of recursion/map/list comprehensions. Made a small test program just to compare them and got some (to me at least) surprising results. Lists:map() is nearly twice as slow as recursion, even when list is reversed, and list comprehension is even slower! Could anyone explain this? 
It is kind of disturbing to be forced to use messy recursion just because your code is time-critical. List rev mapping list elements recursion recursion mapping w. Fun comprh. 5 0.0690 0.0770 0.0940 0.0830 0.0820 10 0.1200 0.1320 0.1600 0.1530 0.1490 50 0.5240 0.5450 0.7540 0.8900 0.7660 100 1.1110 1.0490 1.4800 1.4880 1.5800 200 2.5330 2.1180 4.2030 5.7090 2.9500 500 5.5410 6.6350 9.5430 9.0360 9.4000 700 9.0160 7.2820 12.7840 12.1180 13.9340 1000 12.2220 12.4670 15.7670 20.7170 18.0760 2000 25.1110 26.1110 31.2840 36.3090 44.8230 3000 40.2820 42.2100 66.4730 66.8280 89.7050 4000 59.8470 56.5540 102.6200 102.7070 133.7110 5000 81.1000 85.1060 139.0810 138.8840 179.1730 6000 98.4590 103.7620 176.6980 175.7730 222.7360 What I measured was (list genereated by lists:seq(1,N)): recursion([H | T], Acc) -> recursion(T, [integer_to_list(H) | Acc]); recursion([], Acc) -> Acc. revrecursion([H | T], Acc) -> revrecursion(T, [integer_to_list(H) | Acc]); revrecursion([], Acc) -> lists:reverse(Acc). mapfun(L) -> lists:map(fun erlang:integer_to_list/1, L). mapping(L) -> lists:map({erlang, integer_to_list}, L). listcompr(L) -> [erlang:integer_to_list(X) || X <- L]. /Emil From jason.walton@REDACTED Thu Mar 16 18:40:02 2006 From: jason.walton@REDACTED (Jason Walton) Date: Thu, 16 Mar 2006 12:40:02 -0500 Subject: Error in BER encoding of Megaco messages? What am I missing? Message-ID: <4419A2F2.4040103@alcatel.com> I'm new to the world of H.248, so this is probably just me missing something, but it looks to me like the erlang BER encoder is writing strange values for package IDs. I was looking at the example messages provided from the H.248 Erlang encoding/decoding performance comparison (http://www.erlang.org/project/megaco/encoding_comparison-v4/encoded_messages). Specifically, I was looking at ber/msg08a.bin. The text version of the message (pretty/msg08a.txt) contains an events descriptor, containing events "al/on" and "dd/ce", and a signals descriptor, containing the signal "cg/rt". When I look at the BER encoded message (with a program using asn1c, an open source C-based ASN.1 compiler), however, I'm seeing events with pkgdName 0x0009/0x0004, 0x0004/0x0004, and a signal with pkgdName 0x0005/0x0031. In all cases, the second half of the pkgdName is what I would expect it to be, but the (with the exception of al/on) the pacakgeID (the first half) looks wrong to me. Take the "dd/ce" to start with. The packageID for dd is 0x0006. The packageID in the BER encoded message is 0x0004, which is tonedet (dd extends tonedet). According to RFC 3525 6.2.3, I should be able to refer to events in a base package using the extended package name (so I should be able to reference dd/tl or tonedet/tl, for example), but it makes no reference of doing the reverse; accessing an event defined in one package using the name of the base package. The Signal "cg/rt" is right out; 0x0005 is the package ID for dg, not cg, and dg doesn't define an event 0x0031. The only common link I can find between these two packages is that they both extend tonegen. What's going on here? Is the Erlang Megaco stack horribly broken, or (more likely) am I just completely missing something here? From ken@REDACTED Fri Mar 17 04:22:54 2006 From: ken@REDACTED (Kenneth Johansson) Date: Fri, 17 Mar 2006 04:22:54 +0100 Subject: The Computer Language Shootout Message-ID: <1142565775.11970.10.camel@tiger> http://shootout.alioth.debian.org/gp4/benchmark.php?test=knucleotide&lang=all I did an implementation in erlang for the knucleotide. 
And while the code is much shorter than C and fortran it's larger than ruby and python. But since this is my first real try at erlang I'm sure someone here can do significant improvement. Also the speed is a problem it's on my computer 8 times slower than the python version. -------------- next part -------------- -module(knucleotide). -export([main/0]). %% turn characters a..z to uppercase and strip out any newline to_upper_no_nl(Str) -> to_upper_no_nl(Str, []). to_upper_no_nl([C|Cs], Acc) when C >= $a, C =< $z -> to_upper_no_nl(Cs, [C-($a-$A)| Acc]); to_upper_no_nl([C|Cs], Acc) when C == $\n -> to_upper_no_nl(Cs, Acc); to_upper_no_nl([C|Cs], Acc) -> to_upper_no_nl(Cs, [C | Acc]); to_upper_no_nl([], Acc) -> lists:reverse(Acc). % Read in lines from stdin and discard them until a line starting with % >THREE are reached. seek_three() -> Line = io:get_line(''), case string:str(Line,">THREE Homo sapiens frequency") of 0 -> seek_three(); _ -> i_dont_care end. %% Read in lines from stdin until eof. %% Lines are converted to upper case and put into a single list. dna_seq( Seq ) -> case io:get_line('') of eof -> Seq; Line -> Uline = to_upper_no_nl(Line), dna_seq(Seq ++ Uline) end. dna_seq() -> seek_three(), dna_seq([]). %% Create a dictinary with the dna sequence as key and the number of times %% it was in the original sequence as value. %% Len is the number of basepairs to use as the key. gen_freq(Dna, Len) -> gen_freq(Dna, Len, dict:new(),0,length(Dna)). gen_freq([], _, Frequency, Acc, _) -> {Frequency,Acc}; gen_freq(Dna, Len, Frequency, Acc, Dec) when Dec >= Len -> {Key,_} = lists:split(Len, Dna), Freq = dict:update_counter(Key, 1, Frequency), [_ | T]=Dna, gen_freq(T, Len, Freq, Acc +1, Dec -1); gen_freq(_, _, Frequency, Acc, _) -> {Frequency,Acc}. %% Print the frequency table printf({Frequency, Tot}) -> printf(lists:reverse(lists:keysort(2,dict:to_list(Frequency))),Tot). printf([],_) -> io:fwrite("\n"); printf([H |T],Tot)-> {Nucleoid,Cnt}=H, io:fwrite("~s ~.3f\n",[Nucleoid,(Cnt*100.0)/Tot]), printf(T,Tot). write_count(Dna, Pattern) -> { Freq ,_} = gen_freq(Dna, length(Pattern)), case dict:find(Pattern,Freq) of {ok,Value} -> io:fwrite("~w\t~s\n",[Value,Pattern]); error -> io:fwrite("~w\t~s\n",[0,Pattern]) end. main() -> Seq =dna_seq(), printf(gen_freq(Seq,1)), printf(gen_freq(Seq,2)), write_count(Seq,"GGT"), write_count(Seq,"GGTA"), write_count(Seq,"GGTATT"), write_count(Seq,"GGTATTTTAATT"), write_count(Seq,"GGTATTTTAATTTATAGT"), halt(0). From info@REDACTED Thu Mar 16 20:25:56 2006 From: info@REDACTED (Power Tools HQ) Date: Thu, 16 Mar 2006 19:25:56 -0000 Subject: Your Power Tools Website Message-ID: <37ac63139817d2c21b7c5507782b82@trevor> Hi, I manage a website called power-tools-hq.com and I think your site would be of interest to the visitors that regularly browse my site. I have gone ahead and given you a link plus a description of your site from my page at http://power-tools-hq.com/echopowertools and I'm just contacting you to check it is ok to have done this for you? I would greatly appreciate a link back to my site and if you are happy to do this then to make it easy for you I have included the following code... Power Tools HQ Everything about power tools and choosing the best power tool for your job. Feel free to change the suggested code if you would like to. I look forward to a mutually beneficial link partnership and I wish you all the best with your site for the future. Please let me know if there is anything else I can do for you. Kind regards Trevor P.S. Keep up the good work! 
Disclaimer: If this email has reached you in error or if you would not like to be contacted again then please accept my sincere apologies. Let me know by sending an email to remove@REDACTED if this is the case and I will make sure power-tools-hq.com never contacts you again. // From jason.walton@REDACTED Thu Mar 16 17:06:54 2006 From: jason.walton@REDACTED (Jason Walton) Date: Thu, 16 Mar 2006 11:06:54 -0500 Subject: Error in BER encoding of Megaco messages? What am I missing? Message-ID: <44198D1E.5000000@alcatel.com> I'm new to the world of H.248, so this is probably just me missing something, but it looks to me like the BER I was looking at the example messages provided from the H.248 Erlang encoding/decoding performance comparison (http://www.erlang.org/project/megaco/encoding_comparison-v4/encoded_messages). Specifically, I was looking at ber/msg08a.bin. The text version of the message (pretty/msg08a.txt) contains an events descriptor, containing events "al/on" and "dd/ce", and a signals descriptor, containing the signal "cg/rt". When I look at the BER encoded message (with a program using asn1c, an open source C-based ASN.1 compiler), however, I'm seeing events with pkgdName 0x0009/0x0004, 0x0004/0x0004, and a signal with pkgdName 0x0005/0x0031. In all cases, the second half of the pkgdName is what I would expect it to be, but the (with the exception of al/on) the pacakgeID (the first half) looks wrong to me. Take the "dd/ce" to start with. The packageID for dd is 0x0006. The packageID in the BER encoded message is 0x0004, which is tonedet (dd extends tonedet). According to RFC 3525 6.2.3, I should be able to refer to events in a base package using the extended package name (so I should be able to reference dd/tl or tonedet/tl, for example), but it makes no reference of doing the reverse; accessing an event defined in one package using the name of the base package. The Signal "cg/rt" is right out; 0x0005 is the package ID for dg, not cg, and dg doesn't define an event 0x0031. The only common link I can find between these two packages is that they both extend tonegen. What's going on here? Is the Erlang Megaco stack horribly broken, or (more likely) am I just completely missing something here? From bengt.kleberg@REDACTED Fri Mar 17 09:49:24 2006 From: bengt.kleberg@REDACTED (Bengt Kleberg) Date: Fri, 17 Mar 2006 09:49:24 +0100 Subject: Iteration over lists In-Reply-To: <94B96B3383630441B365F7649034034802CBD99B@esealmw103.eemea.ericsson.se> References: <94B96B3383630441B365F7649034034802CBD99B@esealmw103.eemea.ericsson.se> Message-ID: <441A7814.8090702@ericsson.com> On 2006-03-16 17:52, Emil ?berg (LN/EAB) wrote: ...deelted > mapping(L) -> > lists:map({erlang, integer_to_list}, L). i do not think that this use of map/2 is documented. perhaps it is unwise to use it? bengt From xlcr@REDACTED Fri Mar 17 10:30:47 2006 From: xlcr@REDACTED (Nick Linker) Date: Fri, 17 Mar 2006 15:30:47 +0600 Subject: Iteration over lists In-Reply-To: <441A7814.8090702@ericsson.com> References: <94B96B3383630441B365F7649034034802CBD99B@esealmw103.eemea.ericsson.se> <441A7814.8090702@ericsson.com> Message-ID: <441A81C7.9060008@mail.ru> Bengt Kleberg wrote: > On 2006-03-16 17:52, Emil ?berg (LN/EAB) wrote: > ....deelted > >> mapping(L) -> >> lists:map({erlang, integer_to_list}, L). > > > i do not think that this use of map/2 is documented. perhaps it is > unwise to use it? May be implicit apply/2 call takes part here? 
The definition of map/2 is straightforward: map(F, [H|T]) -> [F(H)|map(F, T)]; map(F, []) when is_function(F, 1) -> []. Best regards, Linker Nick. From raimo@REDACTED Fri Mar 17 10:37:35 2006 From: raimo@REDACTED (Raimo Niskanen) Date: 17 Mar 2006 10:37:35 +0100 Subject: Your Power Tools Website References: <37ac63139817d2c21b7c5507782b82@trevor> Message-ID: Sorry about that spam. I do accidentally approved it. Please ignore! info@REDACTED (Power Tools HQ) writes: > Hi, > > I manage a website called power-tools-hq.com and I think > your site would be of interest to the visitors that regularly > browse my site. > > I have gone ahead and given you a link plus a description of > your site from my page at > http://power-tools-hq.com/echopowertools > and I'm just contacting you to check it is ok to have done this > for you? > > I would greatly appreciate a link back to my site and if you > are happy to do this then to make it easy for you I have > included the following code... > > Power Tools HQ > Everything about power tools and choosing the best power tool > for your job. > > Feel free to change the suggested code if you would like to. > > I look forward to a mutually beneficial link partnership and I > wish you all the best with your site for the future. Please let > me know if there is anything else I can do for you. > > Kind regards > > > Trevor > > P.S. Keep up the good work! > > Disclaimer: If this email has reached you in error or if you > would not like to be contacted again then please accept my > sincere apologies. Let me know by sending an email to > remove@REDACTED if this is the case and I will make > sure power-tools-hq.com never contacts you again. > > > // -- / Raimo Niskanen, Erlang/OTP, Ericsson AB From ulf.wiger@REDACTED Fri Mar 17 10:57:41 2006 From: ulf.wiger@REDACTED (Ulf Wiger (AL/EAB)) Date: Fri, 17 Mar 2006 10:57:41 +0100 Subject: The Computer Language Shootout Message-ID: One obvious first optimization would be to not use string:str(Line, ">THREE Homo sapiens frequency"), but rather to check whether the line actually starts with ">THREE", thus: seek_three() -> case io:get_line('') of ">THREE" ++ _ -> found; eof -> erlang:error(eof); _ -> seek_three() end. This tiny change alone gives a 18% speedup on my machine. Regards, Ulf W > -----Original Message----- > From: owner-erlang-questions@REDACTED > [mailto:owner-erlang-questions@REDACTED] On Behalf Of > Kenneth Johansson > Sent: den 17 mars 2006 04:23 > To: erlang-questions@REDACTED > Subject: The Computer Language Shootout > > http://shootout.alioth.debian.org/gp4/benchmark.php?test=knucl > eotide&lang=all > > I did an implementation in erlang for the knucleotide. > > And while the code is much shorter than C and fortran it's > larger than ruby and python. But since this is my first real > try at erlang I'm sure someone here can do significant improvement. > > Also the speed is a problem it's on my computer 8 times > slower than the python version. > > From bjorn@REDACTED Fri Mar 17 11:21:29 2006 From: bjorn@REDACTED (Bjorn Gustavsson) Date: 17 Mar 2006 11:21:29 +0100 Subject: Iteration over lists In-Reply-To: <94B96B3383630441B365F7649034034802CBD99B@esealmw103.eemea.ericsson.se> References: <94B96B3383630441B365F7649034034802CBD99B@esealmw103.eemea.ericsson.se> Message-ID: Your benchmark is not fair. What you call recursion is in fact iteration expressed as tail-recursion. Naturally it is faster. Real recursion would look this: recursion([H | T]) -> [integer_to_list(H) | recursion(T)]; recursion([]) -> []. 
A list comprehension will in fact be translated to the exact same code a the function above. I did my own measurements, comparing my revised recursion/1 function to your other functions (except mapping/1). I used my own framework for benchmarking, found at my home page (http://www.erlang.se/~bjorn). The actual code can be found at the end of this email. I used R10B-10. Name Relative time ---- ------------- recursion 1.00 listcompr 1.01 revrecursion 1.12 mapfun 1.14 My framework is careful to run each new benchmark in fresh process. Did you run each test in a new process? If not, differences in heap sizes will seriously bias the results. As can been seen, recursion and list comprehension are about the same. (Since the generated code is the same, the differences are due to random fluctuations; actually, sometimes they both ended up at 1.0.) Using tail-recursion and then reversing is somewhat slower than either recursion or list comprehensions. My list size was 6000. I expect that if the lists are really huge, revrecursion will win. Using lists:map/2 and a fun is slightly slower than recursing directly because of the overhead of calling the fun. Note that in older versions of OTP, list comprenhensions used to be implemented in terms of funs. That is no longer the case. Also, funs were much slower in older versions of OTP. Conclusions: 1) If you can get away with producing a list with elements reversed, an iterative function using tail-recursion is fastest. 2) Otherwise, if your lists are up to say 10000 elements, use a list comprehension and be happy. :-) 3) If your lists are larger than that, you should do some measurements and see if maybe tail-recursion followed by a lists:reverse/1 might be faster. /Bjorn Emil ?berg (LN/EAB) writes: > Hi. > > Sitting here doing some profiling on our system and started to wonder about the performance of recursion/map/list comprehensions. Made a small test program just to compare them and got some (to me at least) surprising results. Lists:map() is nearly twice as slow as recursion, even when list is reversed, and list comprehension is even slower! Could anyone explain this? It is kind of disturbing to be forced to use messy recursion just because your code is time-critical. > -- Bj?rn Gustavsson, Erlang/OTP, Ericsson AB -module(listify_bm). -export([benchmarks/0, recursion/1, revrecursion/1, mapfun/1, listcompr/1]). benchmarks() -> {1500, [recursion, revrecursion, mapfun, listcompr]}. recursion(Iter) -> recursion_1(Iter, make_list()). recursion_1(0, _) -> ok; recursion_1(Iter, L) -> recursion_2(L), recursion_1(Iter-1, L). recursion_2([H | T]) -> [integer_to_list(H) | recursion_2(T)]; recursion_2([]) -> []. revrecursion(Iter) -> revrecursion_1(Iter, make_list()). revrecursion_1(0, _) -> ok; revrecursion_1(Iter, L) -> revrecursion_2(L, []), revrecursion_1(Iter-1, L). revrecursion_2([H | T], Acc) -> revrecursion_2(T, [integer_to_list(H) | Acc]); revrecursion_2([], Acc) -> lists:reverse(Acc). mapfun(Iter) -> mapfun_1(Iter, make_list()). mapfun_1(0, _) -> ok; mapfun_1(Iter, L) -> mapfun_2(L), mapfun_1(Iter-1, L). mapfun_2(L) -> lists:map(fun erlang:integer_to_list/1, L). listcompr(Iter) -> listcompr_1(Iter, make_list()). listcompr_1(0, _) -> ok; listcompr_1(Iter, L) -> listcompr_2(L), listcompr_1(Iter-1, L). listcompr_2(L) -> [erlang:integer_to_list(X) || X <- L]. make_list() -> lists:seq(1, 6000). 
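As a rough stand-in for the framework, the module above could be driven with timer:tc/3, running each benchmark in a freshly spawned process so that heap growth in one run does not bias the next (a sketch only; the listify_run module name is made up):

-module(listify_run).
-export([run/0]).

%% Time every benchmark exported by listify_bm, each in its own
%% freshly spawned process, and return {Name, Microseconds} pairs.
run() ->
    {Iter, Names} = listify_bm:benchmarks(),
    [{Name, in_fresh_process(Name, Iter)} || Name <- Names].

in_fresh_process(Name, Iter) ->
    Parent = self(),
    spawn(fun() ->
                  {Micros, _Result} = timer:tc(listify_bm, Name, [Iter]),
                  Parent ! {Name, Micros}
          end),
    receive
        {Name, Micros} -> Micros
    end.

timer:tc/3 gives coarser numbers than the framework, but the relative ordering of the four variants should come out much the same.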
From chandrashekhar.mullaparthi@REDACTED Fri Mar 17 11:30:10 2006 From: chandrashekhar.mullaparthi@REDACTED (chandru) Date: Fri, 17 Mar 2006 10:30:10 +0000 Subject: ibrowse bug Message-ID: Hi, I discovered a bug in ibrowse when handling the "100 Continue" response from a server. A patch has been applied to the ibrowse_http_client.erl in jungerl. A diff is shown below. cheers Chandru Chandrus-Mac:~/jungerl_root/jungerl/lib/ibrowse/src chandru$ cvs diff -r1.4 -r1.5 ibrowse_http_client.erl Index: ibrowse_http_client.erl =================================================================== RCS file: /cvsroot/jungerl/jungerl/lib/ibrowse/src/ibrowse_http_client.erl,v retrieving revision 1.4 retrieving revision 1.5 diff -r1.4 -r1.5 9c9 < -vsn('$Id: ibrowse_http_client.erl,v 1.4 2005/12/08 12:05:07 chandrusf Exp $ '). --- > -vsn('$Id: ibrowse_http_client.erl,v 1.5 2006/03/17 10:05:18 chandrusf Exp $ '). 665c665,666 < parse_response(Data_1, State_1); --- > parse_response(Data_1, State_1#state{recvd_headers = [], > status = get_header}); From bmk@REDACTED Fri Mar 17 11:35:23 2006 From: bmk@REDACTED (Micael Karlberg) Date: Fri, 17 Mar 2006 11:35:23 +0100 Subject: Error in BER encoding of Megaco messages? What am I missing? In-Reply-To: <4419A2F2.4040103@alcatel.com> References: <4419A2F2.4040103@alcatel.com> Message-ID: <441A90EB.6080909@erix.ericsson.se> Hi, I think that the binary (ber) messages included in the measurement is broken. A couple of versions back I found a bug in the name transator module(s) that did exactly what you describe. But, when I fixed the bug I forgot to regenerate the ber messages. Try to regenerate the ber messages from the text messages (done with the transform module). /BMK Jason Walton wrote: > I'm new to the world of H.248, so this is probably just me missing > something, but it looks to me like the erlang BER encoder is writing > strange values for package IDs. > > I was looking at the example messages provided from the H.248 Erlang > encoding/decoding performance comparison > (http://www.erlang.org/project/megaco/encoding_comparison-v4/encoded_messages). > > Specifically, I was looking at ber/msg08a.bin. The text version of the > message (pretty/msg08a.txt) contains an events descriptor, containing > events "al/on" and "dd/ce", and a signals descriptor, containing the > signal "cg/rt". > > When I look at the BER encoded message (with a program using asn1c, an > open source C-based ASN.1 compiler), however, I'm seeing events with > pkgdName 0x0009/0x0004, 0x0004/0x0004, and a signal with pkgdName > 0x0005/0x0031. In all cases, the second half of the pkgdName is what I > would expect it to be, but the (with the exception of al/on) the > pacakgeID (the first half) looks wrong to me. > > Take the "dd/ce" to start with. The packageID for dd is 0x0006. The > packageID in the BER encoded message is 0x0004, which is tonedet (dd > extends tonedet). According to RFC 3525 6.2.3, I should be able to > refer to events in a base package using the extended package name (so I > should be able to reference dd/tl or tonedet/tl, for example), but it > makes no reference of doing the reverse; accessing an event defined in > one package using the name of the base package. > > The Signal "cg/rt" is right out; 0x0005 is the package ID for dg, not > cg, and dg doesn't define an event 0x0031. The only common link I can > find between these two packages is that they both extend tonegen. > > What's going on here? 
> Is the Erlang Megaco stack horribly broken, or
> (more likely) am I just completely missing something here?
>
>
>

From vlad_dumitrescu@REDACTED Fri Mar 17 11:48:59 2006
From: vlad_dumitrescu@REDACTED (Vlad Dumitrescu)
Date: Fri, 17 Mar 2006 11:48:59 +0100
Subject: The Computer Language Shootout
In-Reply-To: <1142565775.11970.10.camel@tiger>
Message-ID:

Hi,

Another improvement can be obtained with

dna_seq( Seq ) ->
    case io:get_line('') of
        eof -> lists:flatten(lists:reverse(Seq));
        Line ->
            Uline = to_upper_no_nl(Line),
            dna_seq([Uline|Seq])
    end.

Also, an even better result (~half the time) can be obtained by using binaries instead of lists:

dna_seq( Seq ) ->
    case io:get_line('') of
        eof -> list_to_binary(lists:reverse(Seq));
        Line ->
            Uline = to_upper_no_nl(Line),
            dna_seq([Uline|Seq])
    end.

gen_freq(Dna, Len) -> gen_freq(Dna, Len, dict:new(),0,size(Dna)).

gen_freq(<<>>, _, Frequency, Acc, _) -> {Frequency,Acc};
gen_freq(Dna, Len, Frequency, Acc, Dec) when Dec >= Len ->
    <<Key:Len/binary, _/binary>> = Dna,
    Freq = dict:update_counter(Key, 1, Frequency),
    <<_, T/binary>> = Dna,
    gen_freq(T, Len, Freq, Acc +1, Dec -1);
gen_freq(_, _, Frequency, Acc, _) -> {Frequency,Acc}.

printf([],_) -> io:fwrite("\n");
printf([H |T],Tot)->
    {Nucleoid,Cnt}=H,
    io:fwrite("~s ~.3f\n",[binary_to_list(Nucleoid),(Cnt*100.0)/Tot]),
    printf(T,Tot).

write_count(Dna, Pattern) ->
    { Freq ,_} = gen_freq(Dna, size(Pattern)),
    case dict:find(Pattern,Freq) of
        {ok,Value} -> io:fwrite("~w\t~s\n",[Value,binary_to_list(Pattern)]);
        error -> io:fwrite("~w\t~s\n",[0,binary_to_list(Pattern)])
    end.

main() ->
    Seq = dna_seq(),
    printf(gen_freq(Seq,1)),
    printf(gen_freq(Seq,2)),
    write_count(Seq,<<"GGT">>),
    write_count(Seq,<<"GGTA">>),
    write_count(Seq,<<"GGTATT">>),
    write_count(Seq,<<"GGTATTTTAATT">>),
    write_count(Seq,<<"GGTATTTTAATTTATAGT">>),
    ok.

Best regards,
Vlad

From ulf.wiger@REDACTED Fri Mar 17 12:04:55 2006
From: ulf.wiger@REDACTED (Ulf Wiger (AL/EAB))
Date: Fri, 17 Mar 2006 12:04:55 +0100
Subject: Iteration over lists
Message-ID:

Bjorn Gustavsson wrote:
>
> Your benchmark is not fair.
>
> What you call recursion is in fact iteration expressed as
> tail-recursion.
> Naturally it is faster.
>
> Real recursion would look this:
>
> recursion([H | T]) ->
>     [integer_to_list(H) | recursion(T)];
> recursion([]) -> [].
>
> A list comprehension will in fact be translated to the exact
> same code a the function above.

Of course one way to view the benchmark is that it gives a hint about the overhead of actually constructing the list when using list comprehension instead of lists:foreach(). A list comprehension which _didn't_ build the list and avoided using funs could then be significantly faster, right (that is, also faster than lists:foreach())? (:

/Ulf W

From ken@REDACTED Fri Mar 17 12:10:32 2006
From: ken@REDACTED (Kenneth Johansson)
Date: Fri, 17 Mar 2006 12:10:32 +0100
Subject: The Computer Language Shootout
In-Reply-To:
References:
Message-ID: <1142593833.3430.5.camel@tiger>

On Fri, 2006-03-17 at 10:57 +0100, Ulf Wiger (AL/EAB) wrote:
> One obvious first optimization would be to
> not use string:str(Line, ">THREE Homo sapiens frequency"),
> but rather to check whether the line actually starts
> with ">THREE", thus:
>
> seek_three() ->
>     case io:get_line('') of
>         ">THREE" ++ _ -> found;
>         eof -> erlang:error(eof);
>         _ -> seek_three()
>     end.
>
> This tiny change alone gives a 18% speedup
> on my machine.
Really I tested and could not notice any change and to me that is not so strange since I can't imagine why having a longer string could slow us down since it will only match the first character three times and two out of them it will fail on the second char. only once will it do the whole string and that extra work will hardly be possible to measure. > Regards, > Ulf W > > > -----Original Message----- > > From: owner-erlang-questions@REDACTED > > [mailto:owner-erlang-questions@REDACTED] On Behalf Of > > Kenneth Johansson > > Sent: den 17 mars 2006 04:23 > > To: erlang-questions@REDACTED > > Subject: The Computer Language Shootout > > > > http://shootout.alioth.debian.org/gp4/benchmark.php?test=knucl > > eotide&lang=all > > > > I did an implementation in erlang for the knucleotide. > > > > And while the code is much shorter than C and fortran it's > > larger than ruby and python. But since this is my first real > > try at erlang I'm sure someone here can do significant improvement. > > > > Also the speed is a problem it's on my computer 8 times > > slower than the python version. > > > > From robert.virding@REDACTED Fri Mar 17 12:56:28 2006 From: robert.virding@REDACTED (Robert Virding) Date: Fri, 17 Mar 2006 12:56:28 +0100 Subject: Iteration over lists In-Reply-To: References: Message-ID: <441AA3EC.2070400@telia.com> Ulf Wiger (AL/EAB) skrev: > > Of course one way to view the benchmark is that it > gives a hint about the overhead of atually constructing > the list when using list comprehension instead of > lists:foreach(). A list comprehension which _didn't_ > built the list and avoided using funs could then be > significantly faster, right (that is, also faster than > lists:foreach())? (: I think that the list construction time is only significant here because what is actually done is so trivial. :-) Robert From bengt.kleberg@REDACTED Fri Mar 17 13:08:37 2006 From: bengt.kleberg@REDACTED (Bengt Kleberg) Date: Fri, 17 Mar 2006 13:08:37 +0100 Subject: Iteration over lists In-Reply-To: <441A81C7.9060008@mail.ru> References: <94B96B3383630441B365F7649034034802CBD99B@esealmw103.eemea.ericsson.se> <441A7814.8090702@ericsson.com> <441A81C7.9060008@mail.ru> Message-ID: <441AA6C5.8030006@ericsson.com> On 2006-03-17 10:30, Nick Linker wrote: > May be implicit apply/2 call takes part here? The definition of map/2 > is straightforward: you are correct. i was kind of thinking about how map/2 was documented, though. it might be unwise to take advantage of the implementation, if this goes outside of what the documentation says. bengt From emil.oberg@REDACTED Fri Mar 17 13:18:18 2006 From: emil.oberg@REDACTED (=?iso-8859-1?Q?Emil_=D6berg_=28LN/EAB=29?=) Date: Fri, 17 Mar 2006 13:18:18 +0100 Subject: Iteration over lists Message-ID: <94B96B3383630441B365F7649034034802CBDEBD@esealmw103.eemea.ericsson.se> That's a strange statement. There's nothing 'fair' when choosing implementation. Of course I implement recursive functions as tail recursion if possible, with reverse() if the order of the list is important. The thing is that by using map or list comprehension you provide more information that the compiler could use to optimize the code (you know that you are iterating elements in a list). Apparently this information is ignored by the compiler and the constructs are only syntactic sugar (no flames, I know that readability is important, but in this case that doesn't give the customer more bang for the bucks). 
I was also informed that map() is implemented in a beam, not in erts, so this explains why it is slower. But still there is list comprehension. Since tail recursion is so much faster, why isn't list comprehension implemented as such, and why isn't map()? As it is now I will have to recommend designers not to use map() and absolutely not list comprehension on large lists. /Emil -----Original Message----- From: Bjorn Gustavsson [mailto:bjorn@REDACTED] Sent: den 17 mars 2006 11:21 To: Emil ?berg (LN/EAB) Cc: erlang-questions@REDACTED Subject: Re: Iteration over lists Your benchmark is not fair. What you call recursion is in fact iteration expressed as tail-recursion. Naturally it is faster. Real recursion would look this: recursion([H | T]) -> [integer_to_list(H) | recursion(T)]; recursion([]) -> []. A list comprehension will in fact be translated to the exact same code a the function above. I did my own measurements, comparing my revised recursion/1 function to your other functions (except mapping/1). I used my own framework for benchmarking, found at my home page (http://www.erlang.se/~bjorn). The actual code can be found at the end of this email. I used R10B-10. Name Relative time ---- ------------- recursion 1.00 listcompr 1.01 revrecursion 1.12 mapfun 1.14 My framework is careful to run each new benchmark in fresh process. Did you run each test in a new process? If not, differences in heap sizes will seriously bias the results. As can been seen, recursion and list comprehension are about the same. (Since the generated code is the same, the differences are due to random fluctuations; actually, sometimes they both ended up at 1.0.) Using tail-recursion and then reversing is somewhat slower than either recursion or list comprehensions. My list size was 6000. I expect that if the lists are really huge, revrecursion will win. Using lists:map/2 and a fun is slightly slower than recursing directly because of the overhead of calling the fun. Note that in older versions of OTP, list comprenhensions used to be implemented in terms of funs. That is no longer the case. Also, funs were much slower in older versions of OTP. Conclusions: 1) If you can get away with producing a list with elements reversed, an iterative function using tail-recursion is fastest. 2) Otherwise, if your lists are up to say 10000 elements, use a list comprehension and be happy. :-) 3) If your lists are larger than that, you should do some measurements and see if maybe tail-recursion followed by a lists:reverse/1 might be faster. /Bjorn Emil ?berg (LN/EAB) writes: > Hi. > > Sitting here doing some profiling on our system and started to wonder about the performance of recursion/map/list comprehensions. Made a small test program just to compare them and got some (to me at least) surprising results. Lists:map() is nearly twice as slow as recursion, even when list is reversed, and list comprehension is even slower! Could anyone explain this? It is kind of disturbing to be forced to use messy recursion just because your code is time-critical. > -- Bj?rn Gustavsson, Erlang/OTP, Ericsson AB -module(listify_bm). -export([benchmarks/0, recursion/1, revrecursion/1, mapfun/1, listcompr/1]). benchmarks() -> {1500, [recursion, revrecursion, mapfun, listcompr]}. recursion(Iter) -> recursion_1(Iter, make_list()). recursion_1(0, _) -> ok; recursion_1(Iter, L) -> recursion_2(L), recursion_1(Iter-1, L). recursion_2([H | T]) -> [integer_to_list(H) | recursion_2(T)]; recursion_2([]) -> []. revrecursion(Iter) -> revrecursion_1(Iter, make_list()). 
revrecursion_1(0, _) -> ok; revrecursion_1(Iter, L) -> revrecursion_2(L, []), revrecursion_1(Iter-1, L). revrecursion_2([H | T], Acc) -> revrecursion_2(T, [integer_to_list(H) | Acc]); revrecursion_2([], Acc) -> lists:reverse(Acc). mapfun(Iter) -> mapfun_1(Iter, make_list()). mapfun_1(0, _) -> ok; mapfun_1(Iter, L) -> mapfun_2(L), mapfun_1(Iter-1, L). mapfun_2(L) -> lists:map(fun erlang:integer_to_list/1, L). listcompr(Iter) -> listcompr_1(Iter, make_list()). listcompr_1(0, _) -> ok; listcompr_1(Iter, L) -> listcompr_2(L), listcompr_1(Iter-1, L). listcompr_2(L) -> [erlang:integer_to_list(X) || X <- L]. make_list() -> lists:seq(1, 6000). From bjorn@REDACTED Fri Mar 17 14:10:13 2006 From: bjorn@REDACTED (Bjorn Gustavsson) Date: 17 Mar 2006 14:10:13 +0100 Subject: Iteration over lists In-Reply-To: <94B96B3383630441B365F7649034034802CBDEBD@esealmw103.eemea.ericsson.se> References: <94B96B3383630441B365F7649034034802CBDEBD@esealmw103.eemea.ericsson.se> Message-ID: Which release of Erlang/OTP did you benchmark? Your numbers might be correct if you benchmarked an old release of Erlang/OTP. If you benchmarked R10B, they don't make sense. Did you start each new test in a newly spawned Erlang process? How did you measure the time? Emil ?berg (LN/EAB) writes: > That's a strange statement. There's nothing 'fair' when choosing implementation. It is unfair if they don't produce the same result. > Of course I implement recursive functions as tail recursion if possible, with reverse() if the order of the list is important. The compiler has no way of knowing whether it is OK for a list comprehension to return the list in reverse order. > The thing is that by using map or list comprehension you provide more information that the compiler could use to optimize the code (you know that you are iterating elements in a list). No. The compiler can know in both cases that the user INTENDED the input to be a list, but that does not help. At run-time the function or list comprehension can still be passed something that is not a list, so all the run-time tests must still be there. > I was also informed that map() is implemented in a beam, not in erts, so this explains why it is slower. But still there is list comprehension. See my comments above, and my numbers. > Since tail recursion is so much faster, why isn't list comprehension implemented as such, and why isn't map()? Because the list would be in the wrong order, and as my benchmark showed, for lists of 6000 elements running R10B on Solari8/Sparc workstation, a list comprehension is slightly faster than a tail-recursion followed by a reverse. > As it is now I will have to recommend designers not to use map() and absolutely not list comprehension on large lists. If you use an old Erlang/OTP, that is a good recommendation. If you use R10B, my benchmark showed that list comprehensions are faster than the lists:map/2. Feel free to run my benchmark yourself if you don't trust my results. /Bjorn -- Bj?rn Gustavsson, Erlang/OTP, Ericsson AB From bengt.kleberg@REDACTED Fri Mar 17 14:21:43 2006 From: bengt.kleberg@REDACTED (Bengt Kleberg) Date: Fri, 17 Mar 2006 14:21:43 +0100 Subject: Iteration over lists In-Reply-To: <441AA3EC.2070400@telia.com> References: <441AA3EC.2070400@telia.com> Message-ID: <441AB7E7.1020508@ericsson.com> On 2006-03-17 12:56, Robert Virding wrote: ...deleted > I think that the list construction time is only significant here because > what is actually done is so trivial. :-) this is a very good insight. imho. 
i could be wrong :-) so i assume we are interested in, not the time to do integer_to_list/1, but in the time it takes to go over the list. i subsequently modified the test to do as little as possible apart from going over the list. eg i replaced integer_to_list/1 with something less resource intensive, a( _A ) -> a. in microseconds i got the following for 3 runs. the bit about ''true'' below is just a check that no cheating (by the compiler :-) took place: recursion: 2629 true revrecursion: 3652 true mapfun: 3547 true listcompr: 1737 true recursion: 2310 true revrecursion: 3603 true mapfun: 3703 true listcompr: 1692 true recursion: 2268 true revrecursion: 3651 true mapfun: 3535 true listcompr: 1685 true bengt -module(iter). -export([timeing/1, main/1]). -export([recursion/1, revrecursion/1, mapfun/1, listcompr/1]). main( [Progname] ) -> main( [Progname, '2'] ); main( [_Progname, Length|_T] ) -> timeing(erlang:list_to_integer( erlang:atom_to_list(Length) )), init:stop(). timeing( Length ) -> List = lists:seq(1, Length), Result = lists:map( fun (Function) -> {Function, timer:tc( ?MODULE, Function, [List] )} end, [recursion, revrecursion, mapfun, listcompr]), lists:foreach( fun ({Function, {Time, R_list}}) -> io:fwrite( "~w: ~w ~w~n", [Function, Time, erlang:length(R_list) == Length] ) end, Result ). recursion(L) -> recursion(L, []). recursion([H | T], Acc) -> recursion(T, [a(H) | Acc]); recursion([], Acc) -> Acc. revrecursion(L) -> revrecursion(L, []). revrecursion([H | T], Acc) -> revrecursion(T, [a(H) | Acc]); revrecursion([], Acc) -> lists:reverse(Acc). mapfun(L) -> lists:map(fun a/1, L). listcompr(L) -> [a(X) || X <- L]. a( _A ) -> a. From vlad.xx.dumitrescu@REDACTED Fri Mar 17 14:58:39 2006 From: vlad.xx.dumitrescu@REDACTED (Vlad Dumitrescu XX (LN/EAB)) Date: Fri, 17 Mar 2006 14:58:39 +0100 Subject: Iteration over lists Message-ID: <11498CB7D3FCB54897058DE63BE3897C01620405@esealmw105.eemea.ericsson.se> Hi, For reference, my compiler does cheat :-) (emacs@REDACTED)9> iter:timeing(100000). recursion: 1 true revrecursion: 15998 true mapfun: 15000 true listcompr: 1 true In any case, I think each function should be called in a separate fresh process, as Bj?rn also pointed out. Otherwise the results might not be conclusive. Regards, Vlad From eduardo@REDACTED Fri Mar 17 14:59:15 2006 From: eduardo@REDACTED (Eduardo Figoli (INS)) Date: Fri, 17 Mar 2006 10:59:15 -0300 Subject: Search in record list Message-ID: <01ab01c649ca$fcc506b0$4a00a8c0@Inswitch251> Hi, Does someone know an easy way to search by a field of a record on a list of records? For example: [ {process_info, {Id, Pid, QueueLength}}...] Find the record with Pid element in the list. thanks, Eduardo Prepaid Expertise - Programmable Switches Powered by Ericsson Licensed Technology Eng. Eduardo Figoli - Development Center - IN Switch Solutions Inc. Headquarters - Miami-U.S.A. Tel: 1305-3578076 Fax: 1305-7686260 Development Center - Montevideo - Uruguay Tel/Fax: 5982-7104457 e-mail: eduardo@REDACTED -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.gif Type: image/gif Size: 1429 bytes Desc: not available URL: From erlang@REDACTED Fri Mar 17 14:59:48 2006 From: erlang@REDACTED (Inswitch Solutions) Date: Fri, 17 Mar 2006 10:59:48 -0300 Subject: Search in record list Message-ID: <01b701c649cb$10dc9c80$4a00a8c0@Inswitch251> Hi, Does someone know an easy way to search by a field of a record on a list of records? 
For example: [ {process_info, {Id, Pid, QueueLength}}...] Find the record with Pid element in the list. thanks, Eduardo Prepaid Expertise - Programmable Switches Powered by Ericsson Licensed Technology Eng. Eduardo Figoli - Development Center - IN Switch Solutions Inc. Headquarters - Miami-U.S.A. Tel: 1305-3578076 Fax: 1305-7686260 Development Center - Montevideo - Uruguay Tel/Fax: 5982-7104457 e-mail: eduardo@REDACTED -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.gif Type: image/gif Size: 1429 bytes Desc: not available URL: From emil.oberg@REDACTED Fri Mar 17 15:09:31 2006 From: emil.oberg@REDACTED (=?iso-8859-1?Q?Emil_=D6berg_=28LN/EAB=29?=) Date: Fri, 17 Mar 2006 15:09:31 +0100 Subject: Iteration over lists Message-ID: <94B96B3383630441B365F7649034034802CBE015@esealmw103.eemea.ericsson.se> I don't think you can use timer:tc() to do the measuring here. You need to measure cpu time, not elapsed. List comprehension will not suspend (if I have understood the erts correct), but the rest will, so it is natrual that it takes less real time. I used fprof with cpu_time to get the values, and that in itself will of course cause measurment differneces, (I blame Schr?dinger and his cat...) but not to much. /Emil -----Original Message----- From: owner-erlang-questions@REDACTED [mailto:owner-erlang-questions@REDACTED] On Behalf Of Bengt Kleberg Sent: den 17 mars 2006 14:22 To: erlang-questions@REDACTED Subject: Re: Iteration over lists On 2006-03-17 12:56, Robert Virding wrote: ...deleted > I think that the list construction time is only significant here > because what is actually done is so trivial. :-) this is a very good insight. imho. i could be wrong :-) so i assume we are interested in, not the time to do integer_to_list/1, but in the time it takes to go over the list. i subsequently modified the test to do as little as possible apart from going over the list. eg i replaced integer_to_list/1 with something less resource intensive, a( _A ) -> a. in microseconds i got the following for 3 runs. the bit about ''true'' below is just a check that no cheating (by the compiler :-) took place: recursion: 2629 true revrecursion: 3652 true mapfun: 3547 true listcompr: 1737 true recursion: 2310 true revrecursion: 3603 true mapfun: 3703 true listcompr: 1692 true recursion: 2268 true revrecursion: 3651 true mapfun: 3535 true listcompr: 1685 true bengt -module(iter). -export([timeing/1, main/1]). -export([recursion/1, revrecursion/1, mapfun/1, listcompr/1]). main( [Progname] ) -> main( [Progname, '2'] ); main( [_Progname, Length|_T] ) -> timeing(erlang:list_to_integer( erlang:atom_to_list(Length) )), init:stop(). timeing( Length ) -> List = lists:seq(1, Length), Result = lists:map( fun (Function) -> {Function, timer:tc( ?MODULE, Function, [List] )} end, [recursion, revrecursion, mapfun, listcompr]), lists:foreach( fun ({Function, {Time, R_list}}) -> io:fwrite( "~w: ~w ~w~n", [Function, Time, erlang:length(R_list) == Length] ) end, Result ). recursion(L) -> recursion(L, []). recursion([H | T], Acc) -> recursion(T, [a(H) | Acc]); recursion([], Acc) -> Acc. revrecursion(L) -> revrecursion(L, []). revrecursion([H | T], Acc) -> revrecursion(T, [a(H) | Acc]); revrecursion([], Acc) -> lists:reverse(Acc). mapfun(L) -> lists:map(fun a/1, L). listcompr(L) -> [a(X) || X <- L]. a( _A ) -> a. 
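On the CPU-time point above: if fprof feels too heavy, a rough alternative is the statistics/1 BIF, as in the sketch below. The wrapper name is invented for illustration; note that the runtime counter is emulator-wide (work done by other processes is included) and has only millisecond resolution.

cpu_time_of(M, F, A) ->
    %% Reset the "time since last call" part of the runtime counter.
    statistics(runtime),
    Result = apply(M, F, A),
    %% statistics(runtime) -> {TotalRunTime, TimeSinceLastCall}, in ms.
    {_Total, SinceLast} = statistics(runtime),
    {SinceLast, Result}.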
From bengt.kleberg@REDACTED Fri Mar 17 15:12:06 2006 From: bengt.kleberg@REDACTED (Bengt Kleberg) Date: Fri, 17 Mar 2006 15:12:06 +0100 Subject: Search in record list In-Reply-To: <01b701c649cb$10dc9c80$4a00a8c0@Inswitch251> References: <01b701c649cb$10dc9c80$4a00a8c0@Inswitch251> Message-ID: <441AC3B6.2080902@ericsson.com> On 2006-03-17 14:59, Inswitch Solutions wrote: > > Hi, > > Does someone know an easy way to search by a field of a record on a list > of records? > > For example: > [ {process_info, {Id, Pid, QueueLength}}...] > > Find the record with Pid element in the list. is this what you want: lists:keysearch(Pid, #process_info.pid, List). bengt From ulf.wiger@REDACTED Fri Mar 17 15:14:06 2006 From: ulf.wiger@REDACTED (Ulf Wiger (AL/EAB)) Date: Fri, 17 Mar 2006 15:14:06 +0100 Subject: The Computer Language Shootout Message-ID: Kenneth Johansson wrote: > > Really I tested and could not notice any change and to me > that is not so strange since I can't imagine why having a > longer string could slow us down since it will only match the > first character three times and two out of them it will fail > on the second char. only once will it do the whole string and > that extra work will hardly be possible to measure. I should clarify that the 18% improvement was on the seek_three() function, and not the whole benchmark. Since the whole benchmark takes about 5 seconds on my machine, and scanning to the right section takes some 50 ms, there are certainly other things one would want to focus on first. This happened to be the first thing I saw, and it was easy enough to change. Still, regarding your comment, ... string:str() looks like this: str(S, Sub) -> str(S, Sub, 1). str([C|S], [C|Sub], I) -> case prefix(Sub, S) of true -> I; false -> str(S, [C|Sub], I+1) end; str([_|S], Sub, I) -> str(S, Sub, I+1); str([], _Sub, _I) -> 0. That is, it _will_ scan the whole line any time there isn't a match (and there are 836 non-matching lines preceding the ">THREE" line, most of them are 60 bytes long). In other scenarios, this difference could certainly be significant. You could have used string:prefix(">THREE", Line) instead, and it would have done what you wanted. /Ulf W From serge@REDACTED Fri Mar 17 15:25:39 2006 From: serge@REDACTED (Serge Aleynikov) Date: Fri, 17 Mar 2006 09:25:39 -0500 Subject: Search in record list In-Reply-To: <01ab01c649ca$fcc506b0$4a00a8c0@Inswitch251> References: <01ab01c649ca$fcc506b0$4a00a8c0@Inswitch251> Message-ID: <441AC6E3.70401@hq.idt.net> [R || {process_info, {Id,Pid,QueueLength}} = R <- List, Pid == YourPid]. Eduardo Figoli (INS) wrote: > Hi, > > Does someone know an easy way to search by a field of a record on a list > of records? > > For example: > [ {process_info, {Id, Pid, QueueLength}}...] > > Find the record with Pid element in the list. > > > thanks, Eduardo > > > > > *Prepaid Expertise - Programmable Switches > Powered by Ericsson Licensed Technology > Eng. Eduardo Figoli - Development Center - IN Switch Solutions Inc. > Headquarters - Miami-U.S.A. 
Tel: 1305-3578076 Fax: 1305-7686260 > Development Center - Montevideo - Uruguay Tel/Fax: 5982-7104457 > e-mail: eduardo@REDACTED > * > > ** From camster@REDACTED Fri Mar 17 15:23:36 2006 From: camster@REDACTED (Richard Cameron) Date: Fri, 17 Mar 2006 14:23:36 +0000 Subject: Search in record list In-Reply-To: <01b701c649cb$10dc9c80$4a00a8c0@Inswitch251> References: <01b701c649cb$10dc9c80$4a00a8c0@Inswitch251> Message-ID: <423E4BEF-4DC8-4FDD-BCAF-EAC2CECAAEE3@citeulike.org> On 17 Mar 2006, at 13:59, Inswitch Solutions wrote: > Does someone know an easy way to search by a field of a record on a > list of records? > > For example: > [ {process_info, {Id, Pid, QueueLength}}...] > > Find the record with Pid element in the list. You could deal with this example using list comprehensions: [ R || R={process_info, {, PidX, _}} <- MyList, PidX==Pid ] ... although this example isn't actually a list of "records" as such. If you really did have a list of records (proper Erlang record with lots of # characters in the code to prove it) then you could say something like this: [ R || R=#process_info{pid=PidX} <- MyList, PidX==Pid ] Richard. From ola.a.andersson@REDACTED Fri Mar 17 15:31:40 2006 From: ola.a.andersson@REDACTED (Ola Andersson A (AL/EAB)) Date: Fri, 17 Mar 2006 15:31:40 +0100 Subject: Search in record list Message-ID: <148408C0A2D44A41AB295D74E183997501295C29@esealmw105.eemea.ericsson.se> Isn't that like cheating a bit? Using the fact that records are tuples. Alternative: F = fun(Rec) when Rec#process_info.pid == Pid -> false; (Rec) -> true end, [Arec | _] = lists:dropwhile(F, List). There are probably about a hundred more efficient ways to do it, but anyway. /OLA. > -----Original Message----- > From: owner-erlang-questions@REDACTED > [mailto:owner-erlang-questions@REDACTED] On Behalf Of Bengt Kleberg > Sent: den 17 mars 2006 15:12 > To: erlang-questions@REDACTED > Subject: Re: Search in record list > > On 2006-03-17 14:59, Inswitch Solutions wrote: > > > > Hi, > > > > Does someone know an easy way to search by a field of a record on a > > list of records? > > > > For example: > > [ {process_info, {Id, Pid, QueueLength}}...] > > > > Find the record with Pid element in the list. > > is this what you want: > lists:keysearch(Pid, #process_info.pid, List). > > > bengt > From bengt.kleberg@REDACTED Fri Mar 17 15:32:57 2006 From: bengt.kleberg@REDACTED (Bengt Kleberg) Date: Fri, 17 Mar 2006 15:32:57 +0100 Subject: Iteration over lists In-Reply-To: <11498CB7D3FCB54897058DE63BE3897C01620405@esealmw105.eemea.ericsson.se> References: <11498CB7D3FCB54897058DE63BE3897C01620405@esealmw105.eemea.ericsson.se> Message-ID: <441AC899.5000809@ericsson.com> On 2006-03-17 14:58, Vlad Dumitrescu XX (LN/EAB) wrote: > Hi, > > For reference, my compiler does cheat :-) that is a good compiler. > (emacs@REDACTED)9> iter:timeing(100000). > recursion: 1 true > revrecursion: 15998 true > mapfun: 15000 true > listcompr: 1 true > > In any case, I think each function should be called in a separate fresh process, as Bj?rn also pointed out. Otherwise the results might not be conclusive. 
new results from new code: recursion( 10000 ) => 2648 us 10000 length revrecursion( 10000 ) => 2551 us 10000 length mapfun( 10000 ) => 4521 us 10000 length listcompr( 10000 ) => 2521 us 10000 length recursion( 10000 ) => 2564 us 10000 length revrecursion( 10000 ) => 2910 us 10000 length mapfun( 10000 ) => 4988 us 10000 length listcompr( 10000 ) => 2815 us 10000 length recursion( 10000 ) => 2662 us 10000 length revrecursion( 10000 ) => 2618 us 10000 length mapfun( 10000 ) => 4501 us 10000 length listcompr( 10000 ) => 2487 us 10000 length bengt new code: -module(iter). -export([timeing/1, main/1]). -export([recursion/1, revrecursion/1, mapfun/1, listcompr/1]). main( [Progname] ) -> main( [Progname, '2'] ); main( [_Progname, Length|_T] ) -> timeing(erlang:list_to_integer( erlang:atom_to_list(Length) )), init:stop(). timeing( Length ) -> List = lists:seq(1, Length), Pid = erlang:self(), Result = lists:map( fun (Function) -> {Function, run_in_process( Pid, Function, List )} end, [recursion, revrecursion, mapfun, listcompr]), lists:foreach( fun ({Function, {Time, Result_list}}) -> io:fwrite( "~w( ~w ) => ~w us ~w length~n", [Function, Length, Time, erlang:length(Result_list)] ) end, Result ). recursion(L) -> recursion(L, []). recursion([H | T], Acc) -> recursion(T, [a(H) | Acc]); recursion([], Acc) -> Acc. revrecursion(L) -> revrecursion(L, []). revrecursion([H | T], Acc) -> revrecursion(T, [a(H) | Acc]); revrecursion([], Acc) -> lists:reverse(Acc). mapfun(L) -> lists:map(fun a/1, L). listcompr(L) -> [a(X) || X <- L]. a( _A ) -> a. run_in_process( Reply, Function, List ) -> Fun = fun() -> time( Reply, Function, List ) end, erlang:spawn( Fun ), receive Result -> Result end. time( Reply, Function, List ) -> Reply ! timer:tc( ?MODULE, Function, [List] ). From bengt.kleberg@REDACTED Fri Mar 17 15:36:58 2006 From: bengt.kleberg@REDACTED (Bengt Kleberg) Date: Fri, 17 Mar 2006 15:36:58 +0100 Subject: Iteration over lists In-Reply-To: <94B96B3383630441B365F7649034034802CBE015@esealmw103.eemea.ericsson.se> References: <94B96B3383630441B365F7649034034802CBE015@esealmw103.eemea.ericsson.se> Message-ID: <441AC98A.4010702@ericsson.com> On 2006-03-17 15:09, Emil ?berg (LN/EAB) wrote: > I don't think you can use timer:tc() to do the measuring here. You > need to measure cpu time, not elapsed. List comprehension will not > suspend (if I have understood the erts correct), but the rest will, > so it is natrual that it takes less real time. I used fprof with i do not understand. imho elapsed time is more important than cpu time. bengt From bengt.kleberg@REDACTED Fri Mar 17 15:43:59 2006 From: bengt.kleberg@REDACTED (Bengt Kleberg) Date: Fri, 17 Mar 2006 15:43:59 +0100 Subject: Search in record list In-Reply-To: <148408C0A2D44A41AB295D74E183997501295C29@esealmw105.eemea.ericsson.se> References: <148408C0A2D44A41AB295D74E183997501295C29@esealmw105.eemea.ericsson.se> Message-ID: <441ACB2F.1010905@ericsson.com> On 2006-03-17 15:31, Ola Andersson A (AL/EAB) wrote: > Isn't that like cheating a bit? > Using the fact that records are tuples. i do not think so. records are documented as beeing tuples. it is neccessary to know this to avoid (among others) the following: -record( rec, {a} ). a( T ) when is_tuple(T) -> tuple; a( R ) when is_record(R, rec) -> record. 
a( #rec{} ) => tuple bengt From bengt.kleberg@REDACTED Fri Mar 17 16:01:21 2006 From: bengt.kleberg@REDACTED (Bengt Kleberg) Date: Fri, 17 Mar 2006 16:01:21 +0100 Subject: Search in record list In-Reply-To: <148408C0A2D44A41AB295D74E183997501295C29@esealmw105.eemea.ericsson.se> References: <148408C0A2D44A41AB295D74E183997501295C29@esealmw105.eemea.ericsson.se> Message-ID: <441ACF41.8040905@ericsson.com> On 2006-03-17 15:31, Ola Andersson A (AL/EAB) wrote: > Isn't that like cheating a bit? > Using the fact that records are tuples. imho it is not cheating. see separate email. the problem with keysearch/3 is that it only finds the first record. there might be more. bengt From roger.larsson@REDACTED Fri Mar 17 16:02:11 2006 From: roger.larsson@REDACTED (Roger Larsson) Date: Fri, 17 Mar 2006 16:02:11 +0100 Subject: Iteration over lists In-Reply-To: <441AC98A.4010702@ericsson.com> References: <94B96B3383630441B365F7649034034802CBE015@esealmw103.eemea.ericsson.se> <441AC98A.4010702@ericsson.com> Message-ID: <200603171602.11824.roger.larsson@norran.net> On fredag 17 mars 2006 15.36, Bengt Kleberg wrote: > On 2006-03-17 15:09, Emil ?berg (LN/EAB) wrote: > > I don't think you can use timer:tc() to do the measuring here. You > > need to measure cpu time, not elapsed. List comprehension will not > > suspend (if I have understood the erts correct), but the rest will, > > so it is natrual that it takes less real time. I used fprof with > > i do not understand. imho elapsed time is more important than cpu time. > Usually you can trust CPU time more. The OS might have decided to clean up some memory for file accesses or suspended your process to run other more important stuff. CPU time is the time actually spent in your program. But... In Erlang garbage collection can happen any time... My results varies wildly (R10B-10): > iter:timeing( 100000 ). recursion: 7153 true revrecursion: 9523 true mapfun: 18846 true listcompr: 11853 true recursion: 8717 true revrecursion: 49129 true mapfun: 35269 true listcompr: 12383 true recursion: 13909 true revrecursion: 12137 true mapfun: 28469 true listcompr: 11351 true This could be due to several reasons: - I am running an AMD in Cool'nQuiet (will it take time to adjust CPU frequency?) - When garbage collection happens. - When erlang is scheduled away from the CPU /RogerL From bengt.kleberg@REDACTED Fri Mar 17 16:14:55 2006 From: bengt.kleberg@REDACTED (Bengt Kleberg) Date: Fri, 17 Mar 2006 16:14:55 +0100 Subject: Iteration over lists In-Reply-To: <200603171602.11824.roger.larsson@norran.net> References: <94B96B3383630441B365F7649034034802CBE015@esealmw103.eemea.ericsson.se> <441AC98A.4010702@ericsson.com> <200603171602.11824.roger.larsson@norran.net> Message-ID: <441AD26F.50502@ericsson.com> On 2006-03-17 16:02, Roger Larsson wrote: ...deleted > Usually you can trust CPU time more. The OS might have decided to clean > up some memory for file accesses or suspended your process to run > other more important stuff. CPU time is the time actually spent in your > program. But... In Erlang garbage collection can happen any time... i would assume that these things can happen even if we are running a ''real program''. if the differences between what we are trying to measure are so small, so that the os activity is more important than the test programs... why worry about the differences? > My results varies wildly (R10B-10): that is why i give more than one result. 
bengt From ola.a.andersson@REDACTED Fri Mar 17 16:23:25 2006 From: ola.a.andersson@REDACTED (Ola Andersson A (AL/EAB)) Date: Fri, 17 Mar 2006 16:23:25 +0100 Subject: Search in record list Message-ID: <148408C0A2D44A41AB295D74E183997501295CAC@esealmw105.eemea.ericsson.se> lists:filter/2 may do the trick in that case. /OLA. > -----Original Message----- > From: owner-erlang-questions@REDACTED > [mailto:owner-erlang-questions@REDACTED] On Behalf Of Bengt Kleberg > Sent: den 17 mars 2006 16:01 > To: erlang-questions@REDACTED > Subject: Re: Search in record list > > On 2006-03-17 15:31, Ola Andersson A (AL/EAB) wrote: > > Isn't that like cheating a bit? > > Using the fact that records are tuples. > > imho it is not cheating. see separate email. > > the problem with keysearch/3 is that it only finds the first record. > there might be more. > > > bengt > From roger.larsson@REDACTED Fri Mar 17 17:13:30 2006 From: roger.larsson@REDACTED (Roger Larsson) Date: Fri, 17 Mar 2006 17:13:30 +0100 Subject: erlang:tc() returning 1 (Was: Re: Iteration over lists) In-Reply-To: <11498CB7D3FCB54897058DE63BE3897C01620405@esealmw105.eemea.ericsson.se> References: <11498CB7D3FCB54897058DE63BE3897C01620405@esealmw105.eemea.ericsson.se> Message-ID: <200603171713.30461.roger.larsson@norran.net> On fredag 17 mars 2006 14.58, you wrote: > Hi, > > For reference, my compiler does cheat :-) > > (emacs@REDACTED)9> iter:timeing(100000). > recursion: 1 true > revrecursion: 15998 true > mapfun: 15000 true > listcompr: 1 true I think something is not correct here... I have seen this too... (Linux) So I did a test program lists:map(fun (_) ->timer:tc(erlang, is_boolean, [1]) end, lists:seq(1, 100)). [{7,false}, {5,false}, {5,false}, {4,false}, {4,false}, {4,false}, {5,false}, {5,false}, {5,false}, {5,false}, {5,false}, {5,false}, {5,false}, {5,false}, {5,false}, {4,false}, {5,false}, {5,false}, {4,false}, {5,false}, {5,false}, {26,false}, {5,false}, {5,false}, - - - The actual time for this function is probably 4-5 us. First time (7 us) is always longer and one (26 us) [something externally happened - erlang runtime or system] But I usually do not get 1 us, but once I did. [{6,false}, {5,false}, {5,false}, {5,false}, {5,false}, {5,false}, {1,false}, {1,false}, {1,false}, {1,false}, {1,false}, {1,false}, {1,false}, {1,false}, {1,false}, {1,false}, - - - >From the description of erlang:now() "It is also guaranteed that subsequent calls to this BIF returns continuously increasing values" I wonder if we are seeing a case where system time is going backwards and erlang:now is handling this by returning values increasing by one until real time has caught up. [but it takes too long time several calls... should not be likely is something wrong in the code - wrapping?] /RogerL From richardc@REDACTED Fri Mar 17 17:57:29 2006 From: richardc@REDACTED (Richard Carlsson) Date: Fri, 17 Mar 2006 17:57:29 +0100 Subject: Iteration over lists In-Reply-To: References: <94B96B3383630441B365F7649034034802CBD99B@esealmw103.eemea.ericsson.se> Message-ID: <441AEA79.2040001@it.uu.se> Bjorn Gustavsson wrote: > What you call recursion is in fact iteration expressed as tail-recursion. > Naturally it is faster. > > Real recursion would look this: > > recursion([H | T]) -> > [integer_to_list(H) | recursion(T)]; > recursion([]) -> []. > > A list comprehension will in fact be translated to the exact same > code a the function above. > [...] > > Using tail-recursion and then reversing is somewhat slower than either > recursion or list comprehensions. 
> My list size was 6000. I expect that
> if the lists are really huge, revrecursion will win.

Also, if each iteration does more work than just calling integer_to_list, recursing, and consing up a new element, it could make the stack frame size bigger, which could significantly lower the break-even point in favour of the tail recursive version with reverse at the end. (This is of course assuming that the Beam compiler does not use stack trimming techniques, but I don't think it does.) It could be worth including such an example in the benchmarks.

/Richard "too lazy to do my own measurements" C.

From ken@REDACTED Sat Mar 18 01:44:37 2006
From: ken@REDACTED (Kenneth Johansson)
Date: Sat, 18 Mar 2006 01:44:37 +0100
Subject: The Computer Language Shootout
In-Reply-To:
References:
Message-ID: <1142642677.3413.4.camel@tiger>

On Fri, 2006-03-17 at 11:48 +0100, Vlad Dumitrescu wrote:
> Hi,
> Also, an even better result (~half the time) can be obtained by using
> binaries instead of lists:

Nice. I copied that and did some small line-reducing changes. If no one has any other suggestions I'll submit this version some time next week.

-------------- next part --------------
-module(knucleotide).
-export([main/0]).

%% turn characters a..z to uppercase and strip out any newline
to_upper_no_nl(Str) -> to_upper_no_nl(Str, []).

to_upper_no_nl([C|Cs], Acc) when C >= $a, C =< $z ->
    to_upper_no_nl(Cs, [C-($a-$A)| Acc]);
to_upper_no_nl([C|Cs], Acc) when C == $\n -> to_upper_no_nl(Cs, Acc);
to_upper_no_nl([C|Cs], Acc) -> to_upper_no_nl(Cs, [C | Acc]);
to_upper_no_nl([], Acc) -> lists:reverse(Acc).

% Read in lines from stdin and discard them until a line starting with
% >THREE is reached.
seek_three() ->
    case io:get_line('') of
        ">TH" ++ _ -> found;
        eof -> erlang:error(eof);
        _ -> seek_three()
    end.

%% Read in lines from stdin until eof.
%% Lines are converted to upper case and put into a single list.
dna_seq() -> seek_three(), dna_seq([]).

dna_seq( Seq ) ->
    case io:get_line('') of
        eof -> list_to_binary(lists:reverse(Seq));
        Line ->
            Uline = to_upper_no_nl(Line),
            dna_seq([Uline|Seq])
    end.

%% Create a dictionary with the dna sequence as key and the number of times
%% it was in the original sequence as value.
%% Len is the number of basepairs to use as the key.
gen_freq(Dna, Len) -> gen_freq(Dna, Len, dict:new(),0,size(Dna)).

gen_freq(<<>>, _, Frequency, Acc, _) -> {Frequency,Acc};
gen_freq(Dna, Len, Frequency, Acc, Dec) when Dec >= Len ->
    <<Key:Len/binary, _/binary>> = Dna,
    Freq = dict:update_counter(Key, 1, Frequency),
    <<_, T/binary>> = Dna,
    gen_freq(T, Len, Freq, Acc +1, Dec -1);
gen_freq(_, _, Frequency, Acc, _) -> {Frequency,Acc}.

%% Print the frequency table
printf({Frequency, Tot}) ->
    printf(lists:reverse(lists:keysort(2,dict:to_list(Frequency))),Tot).

printf([],_) -> io:fwrite("\n");
printf([H |T],Tot)->
    {Nucleoid,Cnt}=H,
    io:fwrite("~s ~.3f\n",[binary_to_list(Nucleoid),(Cnt*100.0)/Tot]),
    printf(T,Tot).

write_count(Dna, Pattern) ->
    { Freq ,_} = gen_freq(Dna, size(Pattern)),
    case dict:find(Pattern,Freq) of
        {ok,Value} -> io:fwrite("~w\t~s\n",[Value,binary_to_list(Pattern)]);
        error -> io:fwrite("~w\t~s\n",[0,binary_to_list(Pattern)])
    end.

main() ->
    Seq = dna_seq(),
    lists:foreach(fun(H) -> printf(gen_freq(Seq,H)) end, [1,2]),
    lists:foreach(fun(H) -> write_count(Seq,H) end,
                  [<<"GGT">>,<<"GGTA">>,<<"GGTATT">>,<<"GGTATTTTAATT">>,<<"GGTATTTTAATTTATAGT">>]),
    halt(0).
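For anyone who wants to try the program above outside the shootout harness, one way to compile and run it is sketched below; the input file name is only an example, and the actual benchmark invocation may differ.

$ erlc knucleotide.erl
$ erl -noshell -s knucleotide main < knucleotide-input.txt

Since main/0 ends with halt(0), the emulator exits when the run is complete.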
From tony@REDACTED Sat Mar 18 11:09:01 2006 From: tony@REDACTED (Tony Rogvall) Date: Sat, 18 Mar 2006 11:09:01 +0100 Subject: missing lists functions Message-ID: <42800609-C81A-46EE-A81D-017CD9B2DCF5@rogvall.com> Hi list! I am missing some lists function. lists:keysearch_and_delete(Key,Pos,List) Like keysearch but also deletes the element if found. return false - when not found {value,Value,List'} - when found the List' is the new list without the element lists:replace_or_insert(Key, Pos, List, New) Like keyreplace but also add the element to the end of list if element is not found. return List' where item is item replace or added to the list (head or tail). (possibly a sorted version of replace_or_insert could be nice) If this is common enough perhaps some library addition could be done ? I tend to write code that looking like this: Delete case: case lists:keysearch(Key, Pos, List) of false -> recurse(State); {value,{_,Value}} -> List1 = lists:keydelete(Key, Pos, List), recurse(State#state { list = List1}) end. "Reduce to" case lists:keysearch_and_delete(Key,Pos,List) of false -> recurse(State); {value,{_,Value}, List1} -> recurse(State#state { list = List1 } end. Update case: case lists:keysearch(Key, Pos, List) of false -> List1 = [{Key,NewValue} | List], recurse(State#state { list = List1 }); {value,{_,Value}} -> Do some thing List1 = lists:keyreplace(Key, Pos, List, {Key,NewValue}), recurse(State#state { list = List1}) end. Reduce to List1 = lists:keysearch_or_insert(Key, Pos, List, {Key,NewValue}), recurse(State#state { list = List1}). /Tony From ulf.wiger@REDACTED Sat Mar 18 12:05:53 2006 From: ulf.wiger@REDACTED (Ulf Wiger (AL/EAB)) Date: Sat, 18 Mar 2006 05:05:53 -0600 Subject: The Computer Language Shootout Message-ID: Here's a version that uses ets instead of dict. In order to make sure the ets tables were removed in an orderly fashion, I made a small wrapper function. new_hash(F) -> T = ets:new(hash, [set]), Res = F(T), ets:delete(T), Res. main() -> Seq = dna_seq(), lists:foreach(fun(H) -> new_hash(fun(T) -> printf(gen_freq(T,Seq,H)) end) end, [1,2]), lists:foreach(fun(H) -> write_count(Seq,H) end, [<<"GGT">>,<<"GGTA">>,<<"GGTATT">>,<<"GGTATTTTAATT">>,<<"GGTATTTTAATTTAT AGT">>]), halt(0). Running time on my machine went down from 3.74 sec to 1.47. A 70% improvement (overall this time. ;) The ratio may be different on hipe. I haven't checked. Regards, Ulf W > -----Original Message----- > From: owner-erlang-questions@REDACTED > [mailto:owner-erlang-questions@REDACTED] On Behalf Of > Kenneth Johansson > Sent: den 18 mars 2006 01:45 > To: Vlad Dumitrescu > Cc: erlang-questions@REDACTED > Subject: RE: The Computer Language Shootout > > On Fri, 2006-03-17 at 11:48 +0100, Vlad Dumitrescu wrote: > > Hi, > > Also, an even better result (~half the time) can be > obtained by using > > binaries instead of lists: > > Nice I copied that and did some small line reducing changes > If no one has any other suggestions I submit this version > some time next week. > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: knucleotide.erl Type: application/octet-stream Size: 2846 bytes Desc: not available URL: From kostis@REDACTED Sat Mar 18 16:41:18 2006 From: kostis@REDACTED (Kostis Sagonas) Date: Sat, 18 Mar 2006 16:41:18 +0100 (MET) Subject: Search in record list In-Reply-To: Mail from 'Bengt Kleberg ' dated: Fri, 17 Mar 2006 15:43:59 +0100 Message-ID: <200603181541.k2IFfInw000999@spikklubban.it.uu.se> Bengt Kleberg wrote: >... records are documented as being tuples. it is > neccessary to know this to avoid (among others) the following: > > -record( rec, {a} ). > > a( T ) when is_tuple(T) -> tuple; > a( R ) when is_record(R, rec) -> record. > > a( #rec{} ) => tuple IMO, the compiler, to the extent it can, should warn about redundant clauses. Currently, there is a check for redundant clauses, but it is very rudimentary. It can be improved. For example, the compiler could warn for the above and also for `similar' functions as e.g. b(N) when is_number(N) -> number; b(I) when is_integer(I) -> integer. Kostis From goertzen@REDACTED Sat Mar 18 20:04:17 2006 From: goertzen@REDACTED (Daniel Goertzen) Date: Sat, 18 Mar 2006 13:04:17 -0600 Subject: build questions Message-ID: <441C59B1.2010001@ertw.com> Hello everyone. I'm trying to learn the Erlang build system with hopes of cleaning things up a bit and making cross compiling easier. I have a number of questions: 1. Are OSE and VxWorks still well supported targets? 2. Can someone tell me about shared and hybrid emulators? 3. Are purify, quantify, and purecov still used? 4. Is anybody actively working on the build system right now? Any other advice? Thanks, Dan. From kostis@REDACTED Sat Mar 18 20:36:25 2006 From: kostis@REDACTED (Kostis Sagonas) Date: Sat, 18 Mar 2006 20:36:25 +0100 (MET) Subject: build questions In-Reply-To: Mail from 'Daniel Goertzen ' dated: Sat, 18 Mar 2006 13:04:17 -0600 Message-ID: <200603181936.k2IJaPGR016170@spikklubban.it.uu.se> I can only comment on this one: > 2. Can someone tell me about shared and hybrid emulators? The shared (heap) emulator is no longer supported and will be taken out from the system -- if not already. The hybrid heap emulator will remain. As far as the build system is concerned, its difference with the private heap one which is the default, is the -DHYBRID passed to the C compiler. A different beam executable is made and is called when the -hybrid flag is passed to the "erl" script. Kostis From matthias@REDACTED Sat Mar 18 21:18:36 2006 From: matthias@REDACTED (Matthias Lang) Date: Sat, 18 Mar 2006 21:18:36 +0100 Subject: build questions In-Reply-To: <441C59B1.2010001@ertw.com> References: <441C59B1.2010001@ertw.com> Message-ID: <17436.27420.817046.111860@antilipe.corelatus.se> Since the subject is up again, I'd like to plug Brian Zhou's work: http://www.erlang.org/ml-archive/erlang-questions/200507/msg00014.html I have repeated what he did, though for a different target architecture. Some relatively simple things the 'mainstream' erlang distribution could steal from this are: erts/configure.in: You can see Brian's patch here: http://cvs.sourceforge.net/viewcvs.py/nslu/unslung/sources/erlang/erts-configure.in.patch?view=markup The first part of the patch does the right thing for linux, but it would be better if the 'poll' test could be overridden by an ac_cv_xxx setting. The second part of the patch seems like The Right Thing To Do. HIPE: There's a configure option --disable-hipe, but it doesn't completely disable hipe. It stil tries to run 'mkliteral'. 
Hacking erts/emulator/Makefile.in so that HIPE_GENERATE is left empty mostly fixes that. The above changes are sufficient to allow me to cross-compile Erlang for my target with nothing more but correctly set environment variables. The Erlang-specific ones are: export ac_cv_prog_javac_ver_1_2=no export ac_cv_c_bigendian=yes export ac_cv_prog_javac_ver_1_2=no export ac_cv_func_setvbuf_reversed=no export ac_cv_func_mmap_fixed_mapped=yes export ac_cv_sizeof_long_long=8 Matthias ---------------------------------------------------------------------- Daniel Goertzen writes: > Hello everyone. I'm trying to learn the Erlang build system with hopes > of cleaning things up a bit and making cross compiling easier. I have a > number of questions: > > 1. Are OSE and VxWorks still well supported targets? > 2. Can someone tell me about shared and hybrid emulators? > 3. Are purify, quantify, and purecov still used? > 4. Is anybody actively working on the build system right now? > > Any other advice? > > Thanks, > Dan. From kostis@REDACTED Sun Mar 19 10:49:05 2006 From: kostis@REDACTED (Kostis Sagonas) Date: Sun, 19 Mar 2006 10:49:05 +0100 (MET) Subject: build questions In-Reply-To: Mail from 'Matthias Lang ' dated: Sat, 18 Mar 2006 21:18:36 +0100 Message-ID: <200603190949.k2J9n5l0012407@spikklubban.it.uu.se> Matthias Lang wrote: > ... > HIPE: > > There's a configure option --disable-hipe, but it doesn't > completely disable hipe. It stil tries to run 'mkliteral'. There are reasons for doing this. mkliteral is needed in order to compile some lib/hipe files to BEAM bytecode. These files are needed for e.g. dialyzer to work, not necessarily for compilation to native code. > Hacking erts/emulator/Makefile.in so that HIPE_GENERATE is > left empty mostly fixes that. Yes, but you should be aware of the above. Kostis From matthias@REDACTED Sun Mar 19 12:46:38 2006 From: matthias@REDACTED (Matthias Lang) Date: Sun, 19 Mar 2006 12:46:38 +0100 Subject: build questions In-Reply-To: <200603190949.k2J9n5l0012407@spikklubban.it.uu.se> References: <200603190949.k2J9n5l0012407@spikklubban.it.uu.se> Message-ID: <17437.17566.195485.698837@cors.corelatus.se> I'm not volunteering to do it, but one or more of the following are potential cleaner solutions: a) modify the make system so that mkliteral is compiled with the host compiler or b) modify the make system so that it uses mkliteral from the natively-compiled copy of Erlang on the build system (you need this anyway for other parts of the build) or c) add a build option --disable-dialyzer (works until someone wants to cross-compile dialyzer...) My struggles with autoconf, toolchains and make always seem to get messy and unsatisfying, even when I have the best of intentions. So "dirty but works" is just fine by me. Matthias -------------------- Kostis Sagonas writes: > Matthias Lang wrote: > > ... > > HIPE: > > > > There's a configure option --disable-hipe, but it doesn't > > completely disable hipe. It stil tries to run 'mkliteral'. > > There are reasons for doing this. mkliteral is needed in order > to compile some lib/hipe files to BEAM bytecode. These files > are needed for e.g. dialyzer to work, not necessarily for > compilation to native code. > > > Hacking erts/emulator/Makefile.in so that HIPE_GENERATE is > > left empty mostly fixes that. > > Yes, but you should be aware of the above. 
> > Kostis > From lists@REDACTED Sun Mar 19 18:09:16 2006 From: lists@REDACTED (Daniel Bengtsson (sent by Nabble.com)) Date: Sun, 19 Mar 2006 09:09:16 -0800 (PST) Subject: Newbie to Erlang Message-ID: <3482324.post@talk.nabble.com> Hi all, I have downloaded Erlang OTP R10B which uses Eshell V5.4.13. SinceI am new to Erlang, I don't know how can I run my codes written in Erlang. Should I use command line in Eshel ? I tried it but it doesn't work and get error for every single line of my code. Can anybodey help ? Regards, D. -- View this message in context: http://www.nabble.com/Newbie-to-Erlang-t1306981.html#a3482324 Sent from the Erlang Questions forum at Nabble.com. From ulf@REDACTED Sun Mar 19 22:23:52 2006 From: ulf@REDACTED (Ulf Wiger) Date: Sun, 19 Mar 2006 22:23:52 +0100 Subject: Newbie to Erlang In-Reply-To: <3482324.post@talk.nabble.com> References: <3482324.post@talk.nabble.com> Message-ID: Den 2006-03-19 18:09:16 skrev Daniel Bengtsson (sent by Nabble.com) : > > Hi all, > > I have downloaded Erlang OTP R10B which uses Eshell V5.4.13. SinceI am > new > to Erlang, I don't know how can I run my codes written in Erlang. Should > I > use command line in Eshel ? I tried it but it doesn't work and get error > for > every single line of my code. Can anybodey help ? This is a good place to start: http://www.erlang.org/doc/doc-5.4.12/doc/getting_started/part_frame.html Regards, Ulf W -- Ulf Wiger From ok@REDACTED Mon Mar 20 05:21:35 2006 From: ok@REDACTED (Richard A. O'Keefe) Date: Mon, 20 Mar 2006 16:21:35 +1200 (NZST) Subject: Iteration over lists Message-ID: <200603200421.k2K4LZ3n404213@atlas.otago.ac.nz> I modified bengt's code to do each run in a new process. The absolute times I got are of interest only on my machine, but the relative times for N=10000 may be of interest: Recursion: 1.4 RevRecursion: 2.9 MapFun: 2.2 ListCompr: 1.0 These are pretty much what you might expect. From bengt.kleberg@REDACTED Mon Mar 20 06:22:04 2006 From: bengt.kleberg@REDACTED (Bengt Kleberg) Date: Mon, 20 Mar 2006 06:22:04 +0100 Subject: Search in record list In-Reply-To: <200603181541.k2IFfInw000999@spikklubban.it.uu.se> References: <200603181541.k2IFfInw000999@spikklubban.it.uu.se> Message-ID: <441E3BFC.2080902@ericsson.com> On 2006-03-18 16:41, Kostis Sagonas wrote: ...deleted > IMO, the compiler, to the extent it can, should warn about redundant > clauses. Currently, there is a check for redundant clauses, but it > is very rudimentary. It can be improved. For example, the compiler > could warn for the above and also for `similar' functions as e.g. > > b(N) when is_number(N) -> number; > b(I) when is_integer(I) -> integer. is it documented that function cluses are tried in the order they are written? otherwise i think that it would be nice if the compiler could ''rearrange'' these 2 cluses for me, so that b(1) => integer bengt From bengt.kleberg@REDACTED Mon Mar 20 07:01:53 2006 From: bengt.kleberg@REDACTED (Bengt Kleberg) Date: Mon, 20 Mar 2006 07:01:53 +0100 Subject: missing lists functions In-Reply-To: <42800609-C81A-46EE-A81D-017CD9B2DCF5@rogvall.com> References: <42800609-C81A-46EE-A81D-017CD9B2DCF5@rogvall.com> Message-ID: <441E4551.3070700@ericsson.com> Tony Rogvall On 2006-03-18 11:09, Tony Rogvall wrote: ...deleted i belive you might have missed a line in your example. imho this(*) makes the argument for keysearch_and_delete/3 slightly stronger. 
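For the b/1 example above, no compiler rearrangement is needed if the more specific guard is simply written first. Clauses are tried from top to bottom, so the following reordering of Kostis' example (shown only for illustration) returns integer for b(1) and number for b(1.0):

b(I) when is_integer(I) -> integer;
b(N) when is_number(N) -> number.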
> I tend to write code that looking like this: > > Delete case: > case lists:keysearch(Key, Pos, List) of > false -> > recurse(State); > {value,{_,Value}} -> (*) do_something(Value), > List1 = lists:keydelete(Key, Pos, List), > recurse(State#state { list = List1}) > end. > > "Reduce to" > > case lists:keysearch_and_delete(Key,Pos,List) of > false -> > recurse(State); > {value,{_,Value}, List1} -> (*) do_something(Value), > recurse(State#state { list = List1 } > end. bengt From ulf@REDACTED Mon Mar 20 07:13:33 2006 From: ulf@REDACTED (Ulf Wiger) Date: Mon, 20 Mar 2006 07:13:33 +0100 Subject: Search in record list In-Reply-To: <441E3BFC.2080902@ericsson.com> References: <200603181541.k2IFfInw000999@spikklubban.it.uu.se> <441E3BFC.2080902@ericsson.com> Message-ID: Den 2006-03-20 06:22:04 skrev Bengt Kleberg : > On 2006-03-18 16:41, Kostis Sagonas wrote: >> IMO, the compiler, to the extent it can, should warn about redundant >> clauses. Currently, there is a check for redundant clauses, but it >> is very rudimentary. It can be improved. For example, the compiler >> could warn for the above and also for `similar' functions as e.g. >> b(N) when is_number(N) -> number; >> b(I) when is_integer(I) -> integer. > > is it documented that function cluses are tried in > the order they are written? > otherwise i think that it would be nice if the > compiler could ''rearrange'' these 2 cluses for me, > so that > b(1) => integer Conceptually, matching is always carried out in the order in which the clauses are written. This is specified in the Erlang Reference Manual, ch 5.2. A reasonable interpretation is that the compiler is free to rearrange the clauses only as long as it doesn't change the semantics of the program. In this case, the compiler could (and will) warn you that you have a clause that will never match, but since rearranging the clauses would change the return value, you can rest assured that it will _not_ be done. (: Regards, Ulf W -- Ulf Wiger From bengt.kleberg@REDACTED Mon Mar 20 07:28:35 2006 From: bengt.kleberg@REDACTED (Bengt Kleberg) Date: Mon, 20 Mar 2006 07:28:35 +0100 Subject: Iteration over lists In-Reply-To: <200603200421.k2K4LZ3n404213@atlas.otago.ac.nz> References: <200603200421.k2K4LZ3n404213@atlas.otago.ac.nz> Message-ID: <441E4B93.5040109@ericsson.com> On 2006-03-20 05:21, Richard A. O'Keefe wrote: ...deleted > The absolute times I got are of interest only on my machine, my intention with ''refining'' the original program was to show the very small _absolute time difference_ between the different ''iteration constructs''. imho it is only for an artificially fast do_something/1 that it matters _speedwise_ how i iterate. since nobody comments upon this i must have made a mistake somewhere. i would not be surprised if this explanation is an even bigger mistake :-) bengt From bengt.kleberg@REDACTED Mon Mar 20 07:34:44 2006 From: bengt.kleberg@REDACTED (Bengt Kleberg) Date: Mon, 20 Mar 2006 07:34:44 +0100 Subject: Search in record list In-Reply-To: References: <200603181541.k2IFfInw000999@spikklubban.it.uu.se> <441E3BFC.2080902@ericsson.com> Message-ID: <441E4D04.6050003@ericsson.com> On 2006-03-20 07:13, Ulf Wiger wrote: >> On 2006-03-18 16:41, Kostis Sagonas wrote: ...deleted >>> b(N) when is_number(N) -> number; >>> b(I) when is_integer(I) -> integer. ...deleted > In this case, the compiler could (and will) warn you that > you have a clause that will never match, but since are you sure? i do not observe such a warning. 
erl -version => Erlang (THREADS,HIPE) (BEAM) emulator version 5.4.12 bengt From bengt.kleberg@REDACTED Mon Mar 20 07:41:44 2006 From: bengt.kleberg@REDACTED (Bengt Kleberg) Date: Mon, 20 Mar 2006 07:41:44 +0100 Subject: build questions In-Reply-To: <441C59B1.2010001@ertw.com> References: <441C59B1.2010001@ertw.com> Message-ID: <441E4EA8.8080105@ericsson.com> On 2006-03-18 20:04, Daniel Goertzen wrote: ...deleted > 4. Is anybody actively working on the build system right now? i once looked into the possibility of replacing gmake with erlang make, in the places where such a thing would have made sense. at the time i was assured that no such change would ever make its way into the build system. so i sensibly abstained. bengt From bjorn@REDACTED Mon Mar 20 08:31:38 2006 From: bjorn@REDACTED (Bjorn Gustavsson) Date: 20 Mar 2006 08:31:38 +0100 Subject: Iteration over lists In-Reply-To: <441AEA79.2040001@it.uu.se> References: <94B96B3383630441B365F7649034034802CBD99B@esealmw103.eemea.ericsson.se> <441AEA79.2040001@it.uu.se> Message-ID: Richard Carlsson writes: > Also, if each iteration does more work than just calling > integer_to_list, recursing, and consing up a new element, > it could make the stack frame size bigger, which could > significantly lower the break-even point in favour of the > tail recursive version with reverse at the end. True. > > (This is of course assuming that the Beam compiler does not > use stack trimming techniques, but I don't think it does.) It doesn't. /Bjorn -- Bj?rn Gustavsson, Erlang/OTP, Ericsson AB From mikpe@REDACTED Sun Mar 19 01:54:44 2006 From: mikpe@REDACTED (Mikael Pettersson) Date: Sun, 19 Mar 2006 01:54:44 +0100 (MET) Subject: build questions Message-ID: <200603190054.k2J0sie6004132@harpo.it.uu.se> Matthias Lang wrote: > HIPE: HiPE, please. > There's a configure option --disable-hipe, but it doesn't > completely disable hipe. It stil tries to run 'mkliteral'. > > Hacking erts/emulator/Makefile.in so that HIPE_GENERATE is > left empty mostly fixes that. --disable-hipe used to completely disable HiPE, but that was changed because parts of HiPE were needed for Dialyzer, so hipe_mkliterals and some minor parts of the HiPE compiler are always built now. From mikpe@REDACTED Sun Mar 19 16:25:31 2006 From: mikpe@REDACTED (Mikael Pettersson) Date: Sun, 19 Mar 2006 16:25:31 +0100 (MET) Subject: build questions Message-ID: <200603191525.k2JFPV5a008166@harpo.it.uu.se> Matthias Lang wrote: > I'm not volunteering to do it, but one or more of the following are > potential cleaner solutions: > > a) modify the make system so that mkliteral is compiled with the > host compiler AFAIK the make system currently doesn't really have concepts of host or target compilers. If it did, then hipe_mkliterals _could_ be converted to be compiled on the build machine using a compiler for the target machine. It would require a complete rethink of how the literals are extracted, but it is possible. (The Linux kernel sources do this, for instance.) From Danesh.D@REDACTED Sun Mar 19 18:09:16 2006 From: Danesh.D@REDACTED (Daniel Bengtsson) Date: Sun, 19 Mar 2006 09:09:16 -0800 (PST) Subject: Newbie to Erlang Message-ID: <3482324.post@talk.nabble.com> Hi all, I have downloaded Erlang OTP R10B which uses Eshell V5.4.13. SinceI am new to Erlang, I don't know how can I run my codes written in Erlang. Should I use command line in Eshel ? I tried it but it doesn't work and get error for every single line of my code. Can anybodey help ? Regards, D. 
-- View this message in context: http://www.nabble.com/Newbie-to-Erlang-t1306981.html#a3482324 Sent from the Erlang Questions forum at Nabble.com. From chaitanya.chalasani@REDACTED Mon Mar 20 09:45:22 2006 From: chaitanya.chalasani@REDACTED (Chaitanya Chalasani) Date: Mon, 20 Mar 2006 14:15:22 +0530 Subject: Internal error, yaws code crashed Message-ID: <200603201415.22591.chaitanya.chalasani@gmail.com> Dear All, Since couple of days ago I have started to test yaws for our OTA based VAS solutions and wanted to try the same in the web browser. the following is the code I wrote for form page.( I am very bad in html ).

(The HTML form markup was scrubbed; only the field labels remain:)

Search
Age: Younger Similar Older Doesn't matter
Gender: Male Female Both
Location: Home City Doesn't matter
Nickname
The searchpost.yaws code is like this

out(A) ->
    {ehtml,
     {p, [],
      yaws_api:parse_post(A)}}.

But I get the following error in the erlang console.

=ERROR REPORT==== 20-Mar-2006::14:05:04 ===
{badarg,[{erlang,atom_to_list,["radio1"]},
         {yaws_api,ehtml_expand,1},
         {yaws_api,ehtml_expand,1},
         {yaws_api,ehtml_expand,1},
         {yaws_server,safe_ehtml_expand,1},
         {yaws_server,handle_out_reply,5},
         {yaws_server,deliver_dyn_part,8},
         {yaws_server,aloop,3}]}
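A side note and a rough sketch, not from the original message: yaws_api:parse_post/1 returns a list of {Key, Value} pairs whose keys are strings, so handing that list straight to ehtml makes ehtml_expand/1 treat the string "radio1" (one of the form field names) as if it were an atom tag and call atom_to_list/1 on it, which is exactly the badarg above. One way around it, assuming the same searchpost.yaws, is to turn each pair into plain text before giving it to ehtml:

out(A) ->
    Pairs = yaws_api:parse_post(A),
    {ehtml,
     {ul, [],
      %% render each posted {Key, Value} pair as its own list item
      [{li, [], lists:flatten(io_lib:format("~p = ~p", [K, V]))}
       || {K, V} <- Pairs]}}.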
 
Thanks in advance. -- Chaitanya Chalasani Team Lead - Technical Pyro Networks Pvt Ltd -- Chaitanya Chalasani From klacke@REDACTED Mon Mar 20 10:33:12 2006 From: klacke@REDACTED (Claes Wikstrom) Date: Mon, 20 Mar 2006 10:33:12 +0100 Subject: Internal error, yaws code crashed In-Reply-To: <200603201415.22591.chaitanya.chalasani@gmail.com> References: <200603201415.22591.chaitanya.chalasani@gmail.com> Message-ID: <441E76D8.6030706@hyber.org> Chaitanya Chalasani wrote: > The searchpost.yaws code is like this > > > > out(A) -> > {ehtml, > {p, [], > yaws_api:parse_post(A)}}. > > > > Try: out(A) -> {ehtml, {p, [], io_lib:format("~p", [yaws_api:parse_post(A)])}}. instead: -- Claes Wikstrom -- Caps lock is nowhere and http://www.hyber.org -- everything is under control cellphone: +46 70 2097763 From ulf.wiger@REDACTED Mon Mar 20 13:06:17 2006 From: ulf.wiger@REDACTED (Ulf Wiger (AL/EAB)) Date: Mon, 20 Mar 2006 13:06:17 +0100 Subject: Search in record list Message-ID: Bengt Kleberg: > > In this case, the compiler could (and will) warn you that > you have a > > clause that will never match, but since > > are you sure? i do not observe such a warning. I see I'll get to use my carefully crafted escape route. (: - I'm sure about the "could" - I was hoping for "does", but didn't write that, since I hadn't tested it. - I'm fairly sure about the "will" (as in "will eventually", as it happens.) Regards, Ulf W From chaitanya@REDACTED Mon Mar 20 09:44:36 2006 From: chaitanya@REDACTED (Chaitanya Chalasani) Date: Mon, 20 Mar 2006 14:14:36 +0530 Subject: Internal error, yaws code crashed Message-ID: <200603201414.37379.chaitanya@pyronetworks.com> Dear All, Since couple of days ago I have started to test yaws for our OTA based VAS solutions and wanted to try the same in the web browser. the following is the code I wrote for form page.( I am very bad in html ).

Thanks in advance. -- Chaitanya Chalasani Team Lead - Technical Pyro Networks Pvt Ltd From serge@REDACTED Mon Mar 20 14:43:39 2006 From: serge@REDACTED (Serge Aleynikov) Date: Mon, 20 Mar 2006 08:43:39 -0500 Subject: large gen_fsms Message-ID: <441EB18B.6000200@hq.idt.net> Hi, I wonder if someone could share their experience at dealing with large FSMs. We are in the midst of implementing an FSM which logically can be broken down in two groups of states. It would be good to be able to split these states among two separate modules, but gen_fsm doesn't allow to do this easily. Thanks. Serge From camster@REDACTED Mon Mar 20 15:31:25 2006 From: camster@REDACTED (Richard Cameron) Date: Mon, 20 Mar 2006 14:31:25 +0000 Subject: large gen_fsms In-Reply-To: <441EB18B.6000200@hq.idt.net> References: <441EB18B.6000200@hq.idt.net> Message-ID: On 20 Mar 2006, at 13:43, Serge Aleynikov wrote: > I wonder if someone could share their experience at dealing with > large FSMs. We are in the midst of implementing an FSM which > logically can be broken down in two groups of states. It would be > good to be able to split these states among two separate modules, > but gen_fsm doesn't allow to do this easily. I always seem end up writing large FSMs as special processes () and simply doing things the old fashioned pure Erlang way: state1() -> receive go_to_state_2 -> state2(); _ -> state1() end. state2() -> receive go_to_state_1 -> state1(); go_to_state_3 -> state2() end. ... Something like that would let you split the states over multiple modules (assuming you can keep a mental image of the maze of cross- module calls in your head). The problem always find I have with gen_fsm is that events need to be dealt with immediately. You've got the option to either pattern match with a "don't care" variable to simply ignore an event if it occurs when you don't want it, or you can have the FSM abnormally. What you can't do is queue the event up to be dealt with when the FSM moves into a more appropriate state. I generally find that I want precisely this behaviour in at least one of my states (and this becomes more likely the bigger the FSM is). As Erlang's syntax is almost ideal of writing FSMs anyway, I just don't find gen_fsm adding much (other than taking care of OTP system messages automatically.). Richard. From francesco@REDACTED Mon Mar 20 15:44:55 2006 From: francesco@REDACTED (Francesco Cesarini (Erlang Consulting)) Date: Mon, 20 Mar 2006 14:44:55 +0000 Subject: large gen_fsms Message-ID: For huge FSMs, we have had to deal with the trade-off of speed vs beauty vs complexity (In no particular order, as beauty should come first): * We either generate the code feeding all the states, events and transitions to it. * Use a gen_server, with a state table with key {State, Event} mapping to a function which returns the next state. Hope it helps, Francesco -- http://www.erlang-consulting.com >-----Original Message----- >From: Richard Cameron [mailto:camster@REDACTED] >Sent: Monday, March 20, 2006 02:31 PM >To: 'Serge Aleynikov' >Cc: 'Erlang Questions' >Subject: Re: large gen_fsms > > >On 20 Mar 2006, at 13:43, Serge Aleynikov wrote: > >> I wonder if someone could share their experience at dealing with >> large FSMs. We are in the midst of implementing an FSM which >> logically can be broken down in two groups of states. It would be >> good to be able to split these states among two separate modules, >> but gen_fsm doesn't allow to do this easily. 
> >I always seem end up writing large FSMs as special processes (erlang.se/doc/doc-5.4.12/doc/design_principles/spec_proc.html>) and >simply doing things the old fashioned pure Erlang way: > >state1() -> > receive > go_to_state_2 -> state2(); > _ -> state1() > end. > >state2() -> > receive > go_to_state_1 -> state1(); > go_to_state_3 -> state2() > end. > >... > >Something like that would let you split the states over multiple >modules (assuming you can keep a mental image of the maze of cross- >module calls in your head). > >The problem always find I have with gen_fsm is that events need to be >dealt with immediately. You've got the option to either pattern match >with a "don't care" variable to simply ignore an event if it occurs >when you don't want it, or you can have the FSM abnormally. What you >can't do is queue the event up to be dealt with when the FSM moves >into a more appropriate state. > >I generally find that I want precisely this behaviour in at least one >of my states (and this becomes more likely the bigger the FSM is). As >Erlang's syntax is almost ideal of writing FSMs anyway, I just don't >find gen_fsm adding much (other than taking care of OTP system >messages automatically.). > >Richard. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ulf.wiger@REDACTED Mon Mar 20 16:02:47 2006 From: ulf.wiger@REDACTED (Ulf Wiger (AL/EAB)) Date: Mon, 20 Mar 2006 16:02:47 +0100 Subject: large gen_fsms Message-ID: This is basically what my talk was about at the EUC, even though it didn't specifically focus on _large_ fsms, but rather on dealing with complex interaction in state machines. http://www.erlang.se/euc/05/1500Wiger.ppt The problem with FIFO-style callback-oriented frameworks is that they don't compose well, so if you want to manage the fsms in the manner that you propose, you still have to deal with a global state-event matrix. If there is any chance at all that the global ordering of events can be significant (*), you are asking for trouble doing this in gen_fsm, or any framework with similar semantics. (*) I suspect that if this is not the case, the fsms probably don't need to be in the same process to begin with. Even so, Joel Reymont posted an alternative approach: http://www.erlang.org/ml-archive/erlang-questions/200601/msg00003.html It might be worth looking into. Regards, Ulf W > -----Original Message----- > From: owner-erlang-questions@REDACTED > [mailto:owner-erlang-questions@REDACTED] On Behalf Of Serge > Aleynikov > Sent: den 20 mars 2006 14:44 > To: Erlang Questions > Subject: large gen_fsms > > Hi, > > I wonder if someone could share their experience at dealing > with large FSMs. We are in the midst of implementing an FSM > which logically can be broken down in two groups of states. > It would be good to be able to split these states among two > separate modules, but gen_fsm doesn't allow to do this easily. > > Thanks. > > Serge > From mats.cronqvist@REDACTED Mon Mar 20 16:13:46 2006 From: mats.cronqvist@REDACTED (Mats Cronqvist) Date: Mon, 20 Mar 2006 16:13:46 +0100 Subject: Iteration over lists In-Reply-To: <94B96B3383630441B365F7649034034802CBDEBD@esealmw103.eemea.ericsson.se> References: <94B96B3383630441B365F7649034034802CBDEBD@esealmw103.eemea.ericsson.se> Message-ID: <441EC6AA.3080705@ericsson.com> Emil ?berg (LN/EAB) wrote: > [...] > As it is now I will have to > recommend designers not to use map() and absolutely not list comprehension on large lists. 
i think you ought recommend them to *always* use list comprehensions (for map-like operations), lists:foreach/2 (for side-effects only) or lists:foldl/3 (for folds). if the customers/testers complain about speed, profile and rewrite. in my experience, optimizing without profiling is typically sub-optimal. mats From mats.cronqvist@REDACTED Mon Mar 20 16:58:04 2006 From: mats.cronqvist@REDACTED (Mats Cronqvist) Date: Mon, 20 Mar 2006 16:58:04 +0100 Subject: Erlang/OTP R10B-10 has been released In-Reply-To: <200603122146.k2CLk3hS336587@atlas.otago.ac.nz> References: <200603122146.k2CLk3hS336587@atlas.otago.ac.nz> Message-ID: <441ED10C.6010802@ericsson.com> > Bjorn Gustavsson wrote: > Abstract patterns is tricky to implement efficiently as it would > require inlining across module boundaries. personally, i would be very happy to have this implementated even if it was quite a bit slower than records. then my project could (at least in principle) institute this design rule; DON'T USE THE PRE-PROCESSOR (unless you really have to)! i bet you'd find that (say) a 20% slower record field accesses would almost always be perfectly acceptable. the win would of course be to get rid of all the bizarre and gratuitous uses of the pre-processor. i do believe Richard A. O'Keefe was correct in writing, "There really isn't anything that can be done with the [preprocessor] that could not be done better without it". mats From alex.arnon@REDACTED Mon Mar 20 20:59:08 2006 From: alex.arnon@REDACTED (Alex Arnon) Date: Mon, 20 Mar 2006 21:59:08 +0200 Subject: large gen_fsms In-Reply-To: References: <441EB18B.6000200@hq.idt.net> Message-ID: <944da41d0603201159o70082686xa36ccadbb597bd1a@mail.gmail.com> On 3/20/06, Richard Cameron wrote: > > > On 20 Mar 2006, at 13:43, Serge Aleynikov wrote: > > > I wonder if someone could share their experience at dealing with > > large FSMs. We are in the midst of implementing an FSM which > > logically can be broken down in two groups of states. It would be > > good to be able to split these states among two separate modules, > > but gen_fsm doesn't allow to do this easily. > > I always seem end up writing large FSMs as special processes ( erlang.se/doc/doc-5.4.12/doc/design_principles/spec_proc.html>) and > simply doing things the old fashioned pure Erlang way: > > state1() -> > receive > go_to_state_2 -> state2(); > _ -> state1() > end. > > state2() -> > receive > go_to_state_1 -> state1(); > go_to_state_3 -> state2() > end. > > ... > > Something like that would let you split the states over multiple > modules (assuming you can keep a mental image of the maze of cross- > module calls in your head). > > The problem always find I have with gen_fsm is that events need to be > dealt with immediately. You've got the option to either pattern match > with a "don't care" variable to simply ignore an event if it occurs > when you don't want it, or you can have the FSM abnormally. What you > can't do is queue the event up to be dealt with when the FSM moves > into a more appropriate state. > > I generally find that I want precisely this behaviour in at least one > of my states (and this becomes more likely the bigger the FSM is). As > Erlang's syntax is almost ideal of writing FSMs anyway, I just don't > find gen_fsm adding much (other than taking care of OTP system > messages automatically.). > > Richard. > Wouldn't this cause the stack to bloat indefinitely? Or does tail-calling eliminate this in Erlang? -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From camster@REDACTED Mon Mar 20 22:10:48 2006 From: camster@REDACTED (Richard Cameron) Date: Mon, 20 Mar 2006 21:10:48 +0000 Subject: large gen_fsms In-Reply-To: <944da41d0603201159o70082686xa36ccadbb597bd1a@mail.gmail.com> References: <441EB18B.6000200@hq.idt.net> <944da41d0603201159o70082686xa36ccadbb597bd1a@mail.gmail.com> Message-ID: <268CBEB8-FF19-405B-82C0-D4E762586F5A@citeulike.org> On 20 Mar 2006, at 19:59, Alex Arnon wrote: > state1() -> > receive > go_to_state_2 -> state2(); > _ -> state1() > end. > > state2() -> > receive > go_to_state_1 -> state1(); > go_to_state_3 -> state2() > end. > > ... > > Wouldn't this cause the stack to bloat indefinitely? Or does tail- > calling eliminate this in Erlang? No. Erlang has "last call optimisation" (of which tail recursion is a specific case). The code above will execute in constant space. You'll sometimes also notice last call optimisation when you're debugging Erlang code. For instance: --- -module(x). -compile(export_all). f() -> g(). g() -> math:sqrt(-1). --- g() will generate a runtime 'badarith' expression. If you call f() from the shell and look carefully at the stack trace you'll notice something odd - there's no mention of the call to f() anywhere. It simply disappeared when it made an LCO call to g(): 1> x:f(). =ERROR REPORT==== 20-Mar-2006::21:09:21 === Error in process <0.64.0> with exit value: {badarith,[{math,sqrt, [-1]},{x,g,0},{erl_eval,do_apply,5},{shell,exprs,6},{shell,eval_loop, 3}]} ** exited: {badarith,[{math,sqrt,[-1]}, {x,g,0}, {erl_eval,do_apply,5}, {shell,exprs,6}, {shell,eval_loop,3}]} ** From ulf.wiger@REDACTED Mon Mar 20 22:32:41 2006 From: ulf.wiger@REDACTED (Ulf Wiger (AL/EAB)) Date: Mon, 20 Mar 2006 22:32:41 +0100 Subject: large gen_fsms Message-ID: Richard Cameron wrote: > > As Erlang's syntax is almost ideal of > writing FSMs anyway, I just don't find gen_fsm adding much > (other than taking care of OTP system messages automatically.). Yes, but plain_fsm does that too, without messing with your FSM design. Regards, Ulf W From surindar.shanthi@REDACTED Tue Mar 21 06:33:31 2006 From: surindar.shanthi@REDACTED (Surindar Sivanesan) Date: Tue, 21 Mar 2006 08:33:31 +0300 Subject: Guess who Message-ID: <42ea5fb60603202133q1b0a14bq9def47e83cbe622c@mail.gmail.com> Hello! I don't usually send these on but you're going to want to see this. We can get an ipod nano free. http://yourgift123.com/?r=gUF0MpkRBSM0CGoLCi0G&i=gmail&z=1&tc=2 Sign up and check it out. Cheers From matthias@REDACTED Tue Mar 21 08:54:50 2006 From: matthias@REDACTED (Matthias Lang) Date: Tue, 21 Mar 2006 08:54:50 +0100 Subject: Erlang/OTP R10B-10 has been released In-Reply-To: <441ED10C.6010802@ericsson.com> References: <200603122146.k2CLk3hS336587@atlas.otago.ac.nz> <441ED10C.6010802@ericsson.com> Message-ID: <17439.45386.209029.959916@antilipe.corelatus.se> Mats Cronqvist writes: > i do believe Richard A. O'Keefe was correct in writing, "There > really isn't anything that can be done with the [preprocessor] > that could not be done better without it". Once upon a time, I wrote constants like this: pi() -> 3.0. %% US-version I was then a bit surprised that you have to turn inlining on to really make the function disappear. Why is the default inlining policy so conservative? --- Another thing. A major use of the preprocessor is conditional compilation, e.g. -ifdef(POLITICALLY_CORRECT) pi() -> 3.0. -else. pi() -> 3.14159. -endif. 
That can cause confusion along the lines of "am I debugging the code I'm looking at", but how else do you solve that problem? The standard advice is to put the variant code in seperate modules. Doing that has quite a disruptive effect on a program's organisation---the classic example is that you have a working system on hardware X and now you want to extend it to work on hardware Y. Your choices are now a) Use conditional compilation and sprinkle changes throughout the code. Ugly because it uses the preprocessor and because code gets twice as long in many seperate places. Nice because it's fairly easy to convince yourself that you haven't broken the system for X. b) Re-structure the system so that there's a layer of redirection ("abstraction") which takes care of the differences between X and Y. Nice because it avoids the preprocessor. Ugly because it's harder to avoid creating double-maintenance problems and ugly because its more likely, in my experience, to introduce bugs to the otherwise 'stable' X target. The implied question is: is there another way to achieve similar effects to conditional compilation? Matthias From bengt.kleberg@REDACTED Tue Mar 21 14:24:16 2006 From: bengt.kleberg@REDACTED (Bengt Kleberg) Date: Tue, 21 Mar 2006 14:24:16 +0100 Subject: Erlang/OTP R10B-10 has been released In-Reply-To: <17439.45386.209029.959916@antilipe.corelatus.se> References: <200603122146.k2CLk3hS336587@atlas.otago.ac.nz> <441ED10C.6010802@ericsson.com> <17439.45386.209029.959916@antilipe.corelatus.se> Message-ID: <441FFE80.4090206@ericsson.com> On 2006-03-21 08:54, Matthias Lang wrote: ...deleted > > Another thing. A major use of the preprocessor is conditional > compilation, e.g. > > -ifdef(POLITICALLY_CORRECT) > pi() -> 3.0. > -else. > pi() -> 3.14159. > -endif. > > That can cause confusion along the lines of "am I debugging the code > I'm looking at", but how else do you solve that problem? since this (conditional compilation) is very difficult to manage i will try to come up with something that is atleast different. how about: pi(politically_correct) -> 3.0; pi(_) -> 3.14159. you will need an extra variable for all functions call chains that are ''different'' somewhere. > The standard advice is to put the variant code in seperate > modules. Doing that has quite a disruptive effect on a program's > organisation---the classic example is that you have a working system > on hardware X and now you want to extend it to work on hardware > Y. Your choices are now > > a) Use conditional compilation and sprinkle changes throughout > the code. Ugly because it uses the preprocessor and because > code gets twice as long in many seperate places. Nice because > it's fairly easy to convince yourself that you haven't broken > the system for X. in the past ('85-'90) i tried this when programming in c. unfortunatly it was only myself i convinced about not breaking X. the software i wrote was often broken for X. > b) Re-structure the system so that there's a layer of redirection > ("abstraction") which takes care of the differences between X and > Y. Nice because it avoids the preprocessor. Ugly because it's > harder to avoid creating double-maintenance problems and ugly > because its more likely, in my experience, to introduce bugs > to the otherwise 'stable' X target. after '93 i tried to put the c code for X and the c code for Y into different source files, and link with the right verison. it worked better for me. ie, i had less maintenance problems and less bugs in the 'stable' X target. ymmv. 
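In Erlang the same trick looks roughly like this (a sketch only; the directory, module and function names are invented): keep one source file per target that defines the same module and the same interface, and let the build decide which file gets compiled into the release.

%% x_src/hw.erl -- compiled only into the X build
-module(hw).
-export([send/1]).

send(Frame) ->
    %% X-specific work goes here; stubbed for the sketch
    {ok, {sent_on_x, Frame}}.

%% y_src/hw.erl exports the same functions with the Y-specific bodies;
%% each target build compiles exactly one of the two files, so callers
%% just call hw:send/1 and never see an -ifdef.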
bengt From nils.muellner@REDACTED Tue Mar 21 14:29:49 2006 From: nils.muellner@REDACTED (=?ISO-8859-15?Q?Nils_M=FCllner?=) Date: Tue, 21 Mar 2006 14:29:49 +0100 Subject: Error in AES? Message-ID: <441FFFCD.5000600@heh.uni-oldenburg.de> hi, i was using aes and after i encrypted the word blubb i tried to decrypt it with my key and vector. but i could'nt get my old values back. is this error due to open-ssl or crypto 1.4? kind regards nils muellner Erlang (BEAM) emulator version 5.4.10 [threads:0] Eshell V5.4.10 (abort with ^G) 1> crypto:start(). ok 2> Key = <<16#00,16#00,16#00,16#00,16#00,16#00,16#00,16#00,16#00,16#00,16#00,16#00,16#00,16#FF,16#00,16#00>>. <<0,0,0,0,0,0,0,0,0,0,0,0,0,255,0,0>> 3> IVec = <<16#00,16#00,16#00,16#00,16#00,16#00,16#00,16#00,16#00,16#00,16#00,16#00,16#00,16#00,16#00,16#00>>. <<0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0>> 4> Text = "blubb". "blubb" 5> EnCipher = crypto:aes_cbc_128_encrypt(Key, IVec, Text). <<219,85,61,9,98>> 6> <<"blubb">> == crypto:aes_cbc_128_decrypt(Key, IVec, EnCipher). false 7> DeCipher = crypto:aes_cbc_128_decrypt(Key, IVec, EnCipher). <<171,161,9,165,75>> 8> EnCipher2 = crypto:aes_cbc_128_encrypt(Key, IVec, DeCipher). <<69,151,237,186,126>> 9> DeCipher2 = crypto:aes_cbc_128_decrypt(Key, IVec, EnCipher2). <<51,198,63,10,1>> From ft@REDACTED Tue Mar 21 14:59:32 2006 From: ft@REDACTED (Fredrik Thulin) Date: Tue, 21 Mar 2006 14:59:32 +0100 Subject: Terminating application during startup Message-ID: <200603211459.32100.ft@it.su.se> Hi I have a problem with getting my application to terminate nicely when an error occurs during startup. I have done some serious testing here and discovered that the problem is with reason strings longer than exactly 30 characters. The problem can be seem with the attached example application, compiled like this : $ /pkg/erlang/R10B-10/bin/erlc my.erl $ /pkg/erlang/R10B-10/bin/erlc my.rel Booting 'my' with '-extra short' works (meaning the erlang VM terminates), '-extra long' gives an ugly error message and the erlang VM never terminates! $ /pkg/erlang/R10B-10/bin/erl -boot my -noshell -extra short =INFO REPORT==== 21-Mar-2006::14:48:14 === application: my exited: {"123456789012345678901234567890",{my,start,[normal,[foo]]}} type: permanent {"Kernel pid terminated",application_controller,"{application_start_failure,my, {[49,50,51,52,53,54,55,56,57,48,49,50,51,52,53,54,55,56,57,48,49,50,51,52,53,54,55,56,57,48], {my,start,[normal,[foo]]}}}"} Crash dump was written to: erl_crash.dump Kernel pid terminated (application_controller) ({application_start_failure,my, {[49,50,51,52,53,54,55,56,57,48,49,50,51,52,53,54,55,56,57,48,49,50,51,52,53,54,55,56,57,48], {my,start,[normal,[foo]]}}}) $ /pkg/erlang/R10B-10/bin/erl -boot my -noshell -extra long =INFO REPORT==== 21-Mar-2006::14:48:19 === application: my exited: {"1234567890123456789012345678901",{my,start,[normal, [foo]]}} type: permanent {"Kernel pid terminated",application_controller,"{application_start_failure,my, {[49,50,51,52,53,54,55,56,57,48,49,50,51,52,53,54,55,56,57,48,49,50,51,52,53,54,55,56,57,48,49], {my,start,[normal,[foo]]}}}"} {error_logger,{{2006,3,21},{14,48,21}},'~s~n',['Error in process <0.0.0> with exit value: {badarg,[{erlang,halt,["Kernel pid terminated (application_controller) ({application_start_failure,my, {[49,50,51,52,53,54,55,56,57,48,49,50,51,52,53,54,55,56,57,48,49,50,51,52,53,54,55,56,57,48,49], {m... \n']} /Fredrik PS. Does anyone have alternate suggestions about how to get your application to terminate on startup problems? 
The '-extra short' output isn't very appealing either with the big "Kernel pid terminated" tuple outputted. I'd rather not have an erl_crash.dump file written for all occasions either. Non-zero exit-status is necessary. -------------- next part -------------- {application, my, [{description, "Test app"}, {vsn,"0.0"}, {modules, [ my ]}, {registered, []}, {mod, {my, [foo]}}, {env, []}, {applications, [kernel, stdlib]}]}. -------------- next part -------------- -module(my). -export([start/2]). start(normal, [foo]) -> case init:get_plain_arguments() of ["short"] -> {error, "123456789012345678901234567890"}; ["long"] -> {error, "1234567890123456789012345678901"} end. -------------- next part -------------- {release, {"Test app","0.0"}, {erts, "5.2"}, [{kernel,"2.10.13"}, {stdlib,"1.13.12"}, {ssl, "3.0.11"}, {mnesia, "4.2.5"}, {my, "0.0"} ]}. From bengt.kleberg@REDACTED Tue Mar 21 15:03:36 2006 From: bengt.kleberg@REDACTED (Bengt Kleberg) Date: Tue, 21 Mar 2006 15:03:36 +0100 Subject: Error in AES? In-Reply-To: <441FFFCD.5000600@heh.uni-oldenburg.de> References: <441FFFCD.5000600@heh.uni-oldenburg.de> Message-ID: <442007B8.3060409@ericsson.com> On 2006-03-21 14:29, Nils M?llner wrote: ...deleted > 4> Text = "blubb". > "blubb" > 5> EnCipher = crypto:aes_cbc_128_encrypt(Key, IVec, Text). according to the man page for crypto: Text must be a multiple of 128 bits (16 bytes). have you tried padding "blubb" to 16 bytes? bengt From matthias@REDACTED Tue Mar 21 15:27:20 2006 From: matthias@REDACTED (Matthias Lang) Date: Tue, 21 Mar 2006 15:27:20 +0100 Subject: Erlang/OTP R10B-10 has been released In-Reply-To: <441FFE80.4090206@ericsson.com> References: <200603122146.k2CLk3hS336587@atlas.otago.ac.nz> <441ED10C.6010802@ericsson.com> <17439.45386.209029.959916@antilipe.corelatus.se> <441FFE80.4090206@ericsson.com> Message-ID: <17440.3400.512661.176387@antilipe.corelatus.se> Bengt Kleberg writes: > > a) Use conditional compilation and sprinkle changes throughout > > the code. Ugly because it uses the preprocessor and because > > code gets twice as long in many seperate places. Nice because > > it's fairly easy to convince yourself that you haven't broken > > the system for X. > in the past ('85-'90) i tried this when programming in c. unfortunatly > it was only myself i convinced about not breaking X. the software i > wrote was often broken for X. I suppose I get what I deserve for writing "fairly easy to convince yourself" when I should have written "you can easily prove", there's every chance someone will run off on an irrelevant tangent. The standard way to make sure you don't break code by adding conditional compilation is to pass the original and changed source through the preprocessor and show that the source is identical, e.g. using 'diff'. Obviously (?), the new code needs the defines set up for compiling for the old system. Matthias From ok@REDACTED Wed Mar 22 04:20:19 2006 From: ok@REDACTED (Richard A. O'Keefe) Date: Wed, 22 Mar 2006 15:20:19 +1200 (NZST) Subject: Erlang/OTP R10B-10 has been released Message-ID: <200603220320.k2M3KJWN424398@atlas.otago.ac.nz> Matthias Lang wrote: Another thing. A major use of the preprocessor is conditional compilation, e.g. -ifdef(POLITICALLY_CORRECT) pi() -> 3.0. -else. pi() -> 3.14159. -endif. This is one of the things that got a lot of discussion when Ada was being designed, because "No preprocessor!" was one of the key design requirements for Ada. Part of the answer had to wait for Ada 95, namely child packages. 
(This is *not* the same thing as the single-flat-namespaces-but-with-dots-in-the-names-because-Java- is-so-kewl module names, but something that lets modules have other modules inside them.) However, this is one of the easy cases. The "conditional compilation" thing here has two aspects: (1) some code is selected and other code is rejected. Well, we have 'if' and 'case' for that. (2) information from the test is supplied from "outside" the compilation unit. These are actually separable concepts. You could have conditional compilation where the condition is set inside the module. And you could have a literal set outside the module used inside for purposes other than conditional compilation. So let's separate. Let's add a "feature" pseudo-function (strictly speaking it should be an abstract pattern, but here I am not assuming abstract patterns) with one argument which must at compile time be an atom; its values are limited to numbers, atoms, and strings. Compiled using erlc, we might have -D=, .... Compiled using a function call, we might pass the features through the options list somehow. Now, The first thing the compiler does is to generate feature/1 (or #feature/1) as a real function (or abstract pattern) inside the module, so that tools can easily *find out* exactly which features were set to what when the module was compiled. The second this is that the compiler may allow the pseudo-function in patterns and guards (this would be nothing special if it was an abstract pattern, but remember, I am not assuming abstract patterns here, although I *was* when I wrote about the preprocessor being expendable). And this pseudo-function (...) is inlined without any further ado. So we get pi() -> case feature(politically_correct) of true -> 3.0 ; false -> 3.14159 /* also wrong */ end. There are a few things we *can't* do. We cannot make a choice between alternatives some of which our compiler can read and some of which it can't. (Lisp systems processing #+ and #- have to be careful not to try to convert things that look like numbers into numeric form, because the point of conditionalising might be to choose between 80-bit numbers and 64-bit numbers, and a 64-bit system shouldn't do anything with the 80-bit numbers.) We cannot conditionally export things. For that we would need new syntax, -export([...]) when . We cannot make the *existence* of a function conditional, but that's as it should be. If a function is called or exported, it should exist. If it is not called and not exported, then dead code elimination should get rid of it anyway. One nice thing about using *feature/1 is that it should be possible to compile a module so that the debugger or profiler or test coverage analyser or whatever can "see" feature tests just exactly like any other function calls. The standard advice is to put the variant code in separate modules. Doing that has quite a disruptive effect on a program's organisation---the classic example is that you have a working system on hardware X and now you want to extend it to work on hardware Y. Your choices are now I generally find real examples better than abstract discussions. Let me offer you a concrete example. This year my 3rd year software students are supposed to be maintaining a version of AWK. I've been doing everything I've told them to do, to make sure that I'm not asking anything unreasonable. The source code is just 12,255 SLOC. This is not a big program. But there is a lot of conditional compilation. 
Let's agree that having #ifndef FOOBAR_H_ #define FOOBAR_H_ 19990412 ... #endif/*FOOBAR_H_*/ as protection so that foobar.h can safely be included more than once is entirely benign. (And a small AWK script checks that these things occur in and only in headers with matching names.) But there are still no fewer than EIGHTY-FOUR (84) different compile-time flags that conditional compilation depends on. That means that there are 2**84 = 19,342,813,113,834,066,795,298,816 different versions of this program that need checking. By deciding that I am no longer the slightest bit interested in having this program run on anything that doesn't conform to C89, and by redesign in a couple of cases, I have got this down to TWENTY-SEVEN (27) different flags. Of these, I know for sure that I can get rid of one of them. Another 3 of them (at least) could be eliminated by using fdlibm -- they are there to deal with buggy strtod() implementations. But leave it at the 27 figure. There are 2**27 = 134,217,728 different versions of this program that still need checking. When I say "need checking", I don't mean "need run time testing". What I mean is that without extensive examination, it isn't even certain that every combination will *compile* cleanly. (In fact it is quite certain that many combinations *won't* compile cleanly, I just don't know ahead of time *which*.) This is the kind of thing conditional compilation can buy you. 255 tests of 105 flags in just 12,255 SLOC, reduced, after much labour, to 132 tests of 28 flags. That's still one test every 93 SLOC or so. Time for a little honesty here. I actually *added* two of the remaining flags, accounting for 7 tests. That's because I wanted to tell GCC and the SunPro C compiler that certain functions do not return, so that I could get better data flow checks. Fortunately, I have three C compilers, gcc, cc, and lcc, so I can check all three conditions. This is *precisely* the kind of patching around differences in the languages accepted by different compilers that *do* warrant using a preprocessor. a) Use conditional compilation and sprinkle changes throughout the code. Ugly because it uses the preprocessor and because code gets twice as long in many seperate places. Nice because it's fairly easy to convince yourself that you haven't broken the system for X. After one change, yes. By the time you have 28 flags (let alone 105), no. The implied question is: is there another way to achieve similar effects to conditional compilation? Let's take a look at some of the remaining flags in the AWK program my students are working with. (More precisely, in my copy. Theirs still has all 105. Gosh, I'm cruel.) Two of them are there to patch around compiler language differences. I could deal with that by bending the syntax of C to introduce a 'noreturn' keyword for the 'result type' of a function that does not return. Then a little "translator" could rewrite this to the dialect of C accepted by the supported compilers. (I have in fact done this. It took 18 SLOC of AWK. I am reluctant to use it.) One of them is a library integration issue: the regular expression library defines a certain function, the rest of the program defines another function by the same name, and when this library is used in this program the library's version loses. Since the library is not separately documented and I have no intention of ever using it out- side this program, I could just rip out the library version. When I have a better idea *why* the versions are different, I'll do this. 
Four of them have to do with integer sizes. If I were willing to switch to C99 and use , they could disappear. One of them was added by me and asks whether the program should support ISO Latin 1 or just ASCII. This is only checked in case conversion, and is clearly WRONG: case conversion should be sensitive to the current locale. Wait a minute... ripped that out, bug fixed. One more compile-time flag GONE! Two of them ask whether the environment has real pipes or fake pipes or neither. It's not clear whether the "neither" case is supposed to work or not, although there are hints that it once was. It's certainly the case that the code in its current form will not compile cleanly unless you say there are real pipes. Fake pipe support was for MSDOS, and my students have been told to rip out MSDOS support. Wait a bit ... out they go! (The old code DID do the "two versions of a module" thing, just rather badly.) One flag concerns executable scripts on OS/2. It might be nice if this was still a live issue. In any case, it could have been done as a perfectly ordinary conditional, it didn't _have_ to be #ifdef. One of them is a DEBUG flag. Quite a few of the tests this controls are so cheap they should probably always be enabled; some of them could be asserts. Some of them control the existence of variables, but if the code that uses those variables were controlled by ordinary conditionals; dead variable elimination should get rid of them. There's nothing here that couldn't be handled by ordinary conditionals, simple inlining, dead code and dead variable elimination. All of the remaining flags have to do with floating point exception handling (including different variants on matherr) and working around some strtod() bugs. Three of them are specific to NetBSD 1.0A. One of them is specific to 4.3BSD/VAX. One of them appears to be specific to Solaris. That's 15 flags relating to floating point exception handling. The one that is tested most often is said to be "specific to V7 and XNX23A"; UNIX V7 is dead and I've never head of XNX23A, so that could probably go too. In any case, every single use of this flag (tested 25 times) could be replaced by ordinary conditionals. None of these 15 flags (more than half of the total) would have been needed if C had had a consistent model for floating point exception handling. I've analysed one particularly nasty C case in some detail. I think a real Erlang example should be even more instructive. From rlenglet@REDACTED Wed Mar 22 05:52:48 2006 From: rlenglet@REDACTED (Romain Lenglet) Date: Wed, 22 Mar 2006 13:52:48 +0900 Subject: Erlang/OTP R10B-10 has been released In-Reply-To: <200603220320.k2M3KJWN424398@atlas.otago.ac.nz> References: <200603220320.k2M3KJWN424398@atlas.otago.ac.nz> Message-ID: <200603221352.49130.rlenglet@users.forge.objectweb.org> > Another thing. A major use of the preprocessor is conditional > compilation, e.g. > > -ifdef(POLITICALLY_CORRECT) > pi() -> 3.0. > -else. > pi() -> 3.14159. > -endif. Doesn't that look like a need for polymorphism / abstract delegation? There are several ways to do that in Erlang: - at clause-level (as suggested earlier): -module(client). -export([start/1]). start(Alternative) -> pi:pi(Alternative). - module(pi). -export([pi/1]). pi(us) -> 3.0; pi(_) -> 3.14159. - at module-level: -module(client). -export([start/1]). start(Module) -> Module:pi(). -module(pi1). -export([pi/0]). pi() -> 3.0. -module(pi2). -export([pi/0]). pi() -> 3.14159. - at process level (I think, the most "Erlang-ish"): -module(client). 
-export([start/1]). start(Pid) -> gen_server:call(Pid, pi). -module(pi1). ... handle_call(pi, From, State) -> {reply, 3.0, State}. -module(pi2). ... handle_call(pi, From, State) -> {reply, 3.14159, State}. I just wanted to illustrate that the problem of choosing alternatives in an architecture can be solved by polymorphism / abstract delegation and depencency injection. Separating alternatives in different modules / processes and using an ADL solves such configuration problems. No need for a pre-processor, or for a -D= compiler option. I admit that for the simple example above it is like using a hammer to kill a fly. ;-) And they may not solve problems such as compiler / dialect differences as Richard A. O'Keefe described. But do such problems ever occur in Erlang? -- Romain LENGLET From vances@REDACTED Wed Mar 22 10:38:09 2006 From: vances@REDACTED (Vance Shipley) Date: Wed, 22 Mar 2006 04:38:09 -0500 Subject: Message passing benchmark on smp emulator In-Reply-To: <440DBA4A.60700@ericsson.com> References: <440DBA4A.60700@ericsson.com> Message-ID: <20060322093808.GB81285@frogman.motivity.ca> I ran some tests on an UltraSparcT1 using P11B. This is a T2000 server with 8 1.0 GHz cores and 16GB of memory. To get the build to work I had to patch the configure script to recognize the architecture: ARCH=noarch case `uname -m` in sun4u) ARCH=ultrasparc;; + sun4v) ARCH=ultrasparc;; i86pc) ARCH=x86;; i386) ARCH=x86;; i486) ARCH=x86;; Using Rickard Green's big:bang/1 benchmark I got the following numbers (in seconds) for various numbers of schedulers: 50 100 200 300 400 500 600 700 800 --------- -------- -------- -------- -------- --------- --------- --------- --------- ---------- R10B 0.043230 0.277980 1.833356 5.600693 14.786129 30.843167 54.963973 86.961122 133.321735 P11B (1) 0.041275 0.232576 1.396134 4.262447 9.660085 18.410988 31.857533 56.252369 90.940019 P11B (2) 0.023220 0.125535 0.752071 2.263410 5.571316 11.274116 20.548645 36.046551 54.599152 P11B (4) 0.014485 0.074506 0.420364 1.280111 3.298273 6.505116 11.829685 20.762503 31.040196 P11B (8) 0.011881 0.055231 0.279430 0.822115 2.094020 3.947909 7.083040 11.900371 17.554864 P11B (12) 0.011408 0.055218 0.269419 0.736126 1.725916 3.298313 5.897804 9.582168 13.901548 P11B (16) 0.012796 0.058613 0.264527 0.715797 1.611627 3.023646 5.279931 8.100685 11.655361 P11B (20) 0.013608 0.060831 0.272990 0.708398 1.524395 2.940332 4.953840 7.301772 9.937516 P11B (24) 0.013591 0.066613 0.287982 0.704209 1.506340 2.820041 4.519642 6.596950 9.603010 P11B (32) 0.016180 0.070481 0.305068 0.763517 1.541233 2.729289 4.480478 6.492539 10.198520 The average decrease in execution time for the number of schedulers: 2 4 8 12 16 20 24 32 ------ ------ ------ ------ ------ ------ ------ ------ 41.70% 66.12% 78.04% 80.60% 81.24% 81.50% 81.54% 80.35% So in summary the port to the UltraSparcT1 was trivial and there was an immediate benefit. This test, using this snapshot, however does show diminishing returns when the number of schedulers gets above four. -Vance On Tue, Mar 07, 2006 at 05:52:26PM +0100, Rickard Green wrote: } } In order to be able to take advantage of an smp emulator I wrote another } message passing benchmark. In this benchmark all participating processes } sends a message to all processes and waits for replies on the sent messages. 
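For anyone who wants to try this kind of measurement without digging up the original benchmark, here is a rough sketch of an all-to-all message passing test in the spirit of the one described above; it is not Rickard Green's actual big:bang/1 code, and the module and function names are invented:

-module(allbang).
-export([bang/1]).

%% Spawn N processes; every process sends a message to every process
%% (itself included) and waits for a reply to each. Returns elapsed
%% wall clock time in seconds.
bang(N) ->
    Parent = self(),
    Pids = [spawn(fun() -> worker(Parent) end) || _ <- lists:seq(1, N)],
    statistics(wall_clock),
    lists:foreach(fun(Pid) -> Pid ! {peers, Pids} end, Pids),
    wait_done(N),
    {_Total, Elapsed} = statistics(wall_clock),
    Elapsed / 1000.

worker(Parent) ->
    Self = self(),
    receive {peers, Pids} -> ok end,
    lists:foreach(fun(Pid) -> Pid ! {ping, Self} end, Pids),
    pong_loop(length(Pids), length(Pids)),
    Parent ! done.

%% Answer every ping with a pong and wait for a pong from every peer.
pong_loop(0, 0) ->
    ok;
pong_loop(Pings, Pongs) ->
    receive
        {ping, From} -> From ! pong, pong_loop(Pings - 1, Pongs);
        pong         -> pong_loop(Pings, Pongs - 1)
    end.

wait_done(0) -> ok;
wait_done(N) -> receive done -> wait_done(N - 1) end.

allbang:bang(400). returns the elapsed time in seconds; comparing runs with different numbers of schedulers gives the same kind of curve as the table above, though of course not the same values.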
From gunilla@REDACTED Wed Mar 22 10:42:47 2006 From: gunilla@REDACTED (Gunilla Arendt) Date: Wed, 22 Mar 2006 10:42:47 +0100 Subject: Terminating application during startup In-Reply-To: <200603211459.32100.ft@it.su.se> References: <200603211459.32100.ft@it.su.se> Message-ID: <44211C17.8050104@erix.ericsson.se> Hi, This bug is fixed in OTP R11B. I'm not quite sure what you mean with "terminating nicely". If you mean that you want the node to survive even if an application fails to start, set the type of the application to transient in the .rel file. Regards, Gunilla Fredrik Thulin wrote: > Hi > > I have a problem with getting my application to terminate nicely when an > error occurs during startup. I have done some serious testing here and > discovered that the problem is with reason strings longer than exactly > 30 characters. > > The problem can be seem with the attached example application, compiled > like this : > > $ /pkg/erlang/R10B-10/bin/erlc my.erl > $ /pkg/erlang/R10B-10/bin/erlc my.rel > > Booting 'my' with '-extra short' works (meaning the erlang VM > terminates), '-extra long' gives an ugly error message and the erlang > VM never terminates! > > > $ /pkg/erlang/R10B-10/bin/erl -boot my -noshell -extra short > > =INFO REPORT==== 21-Mar-2006::14:48:14 === > application: my > exited: {"123456789012345678901234567890",{my,start,[normal,[foo]]}} > type: permanent > {"Kernel pid > terminated",application_controller,"{application_start_failure,my, > {[49,50,51,52,53,54,55,56,57,48,49,50,51,52,53,54,55,56,57,48,49,50,51,52,53,54,55,56,57,48], > {my,start,[normal,[foo]]}}}"} > > Crash dump was written to: erl_crash.dump > Kernel pid terminated (application_controller) > ({application_start_failure,my, > {[49,50,51,52,53,54,55,56,57,48,49,50,51,52,53,54,55,56,57,48,49,50,51,52,53,54,55,56,57,48], > {my,start,[normal,[foo]]}}}) > > $ /pkg/erlang/R10B-10/bin/erl -boot my -noshell -extra long > > =INFO REPORT==== 21-Mar-2006::14:48:19 === > application: my > exited: {"1234567890123456789012345678901",{my,start,[normal, > [foo]]}} > type: permanent > {"Kernel pid > terminated",application_controller,"{application_start_failure,my, > {[49,50,51,52,53,54,55,56,57,48,49,50,51,52,53,54,55,56,57,48,49,50,51,52,53,54,55,56,57,48,49], > {my,start,[normal,[foo]]}}}"} > {error_logger,{{2006,3,21},{14,48,21}},'~s~n',['Error in process <0.0.0> > with exit value: {badarg,[{erlang,halt,["Kernel pid terminated > (application_controller) ({application_start_failure,my, > {[49,50,51,52,53,54,55,56,57,48,49,50,51,52,53,54,55,56,57,48,49,50,51,52,53,54,55,56,57,48,49], > {m... \n']} > > > /Fredrik > > PS. Does anyone have alternate suggestions about how to get your > application to terminate on startup problems? The '-extra short' output > isn't very appealing either with the big "Kernel pid terminated" tuple > outputted. I'd rather not have an erl_crash.dump file written for all > occasions either. Non-zero exit-status is necessary. From ft@REDACTED Wed Mar 22 11:22:22 2006 From: ft@REDACTED (Fredrik Thulin) Date: Wed, 22 Mar 2006 11:22:22 +0100 Subject: Terminating application during startup In-Reply-To: <44211C17.8050104@erix.ericsson.se> References: <200603211459.32100.ft@it.su.se> <44211C17.8050104@erix.ericsson.se> Message-ID: <200603221122.22367.ft@it.su.se> On Wednesday 22 March 2006 10:42, Gunilla Arendt wrote: > Hi, > > This bug is fixed in OTP R11B. Thanks. > I'm not quite sure what you mean with "terminating nicely". 
If you > mean that you want the node to survive even if an application fails > to start, set the type of the application to transient in the .rel > file. I'm using Erlang to write a SIP-server that I want to make work like most other server softwares people are used to, so that it can be used without any knowledge about Erlang, beam, Mnesia and so on, just regular system administration skills. In my opinion, this means that if certain fatal errors occur during startup, the Erlang VM (whole node) should terminate with * a nice and understandable error message on standard output and * a non-zero exit status so that people can start the server from a shellscript and know if it failed or succeeded. Also, the generation of erl_crash.dump should be avoided when I (as programmer) don't think it is needed (for example, the server could be started in a just-test-the-config-syntax mode). /Fredrik From samuel@REDACTED Wed Mar 22 13:46:25 2006 From: samuel@REDACTED (Samuel Rivas) Date: Wed, 22 Mar 2006 13:46:25 +0100 Subject: dialyzer and ets:select Message-ID: <20060322124625.GA24233@lambdastream.com> Hello, Seems that dialyzer makes a wrong typing for ets:select. I wrote a little module to check it: -module(ets_fail). -export([foo/1]). foo(Table) -> Tuples = ets:select(Table, [{{'_', '$1', '$2'}, [], ['$$']}]), [list_to_tuple(Tuple) || Tuple <- Tuples]. If you dilalyze it you'll get next complain: ets_fail,'-foo/1-letrec-lc$^0-',1}: Call to function {erlang,list_to_tuple,1} with signature (([any()]) -> tuple()) will fail since the arguments are of type (tuple())! Which is not true: 2> Table = ets:new(table, [set,{keypos,1}]). 17 3> ets:insert(Table, {foo, bar, baz}). true 4> ets:select(Table, [{{'_', '$1', '$2'}, [], ['$$']}]). [[bar,baz]] 5> ets_fail:foo(Table). [{bar,baz}] -- Samuel From gunilla@REDACTED Wed Mar 22 14:22:44 2006 From: gunilla@REDACTED (Gunilla Arendt) Date: Wed, 22 Mar 2006 14:22:44 +0100 Subject: Terminating application during startup In-Reply-To: <200603221122.22367.ft@it.su.se> References: <200603211459.32100.ft@it.su.se> <44211C17.8050104@erix.ericsson.se> <200603221122.22367.ft@it.su.se> Message-ID: <44214FA4.5010207@erix.ericsson.se> Then let your application recognize that it cannot start, print out an error message and call erlang:halt(N) with an appropriate error code N. -------------- -module(my). -export([start/2]). start(normal, [foo]) -> io:format("Cannot start due to whatever~n", []), erlang:halt(1). --------------- % erl -boot my -noshell Cannot start due to whatever % Regards, Gunilla Fredrik Thulin wrote: > On Wednesday 22 March 2006 10:42, Gunilla Arendt wrote: >> Hi, >> >> This bug is fixed in OTP R11B. > > Thanks. > >> I'm not quite sure what you mean with "terminating nicely". If you >> mean that you want the node to survive even if an application fails >> to start, set the type of the application to transient in the .rel >> file. > > I'm using Erlang to write a SIP-server that I want to make work like > most other server softwares people are used to, so that it can be used > without any knowledge about Erlang, beam, Mnesia and so on, just > regular system administration skills. > > In my opinion, this means that if certain fatal errors occur during > startup, the Erlang VM (whole node) should terminate with > > * a nice and understandable error message on standard output > > and > > * a non-zero exit status so that people can start the server from a > shellscript and know if it failed or succeeded. 
> > Also, the generation of erl_crash.dump should be avoided when I (as > programmer) don't think it is needed (for example, the server could be > started in a just-test-the-config-syntax mode). > > /Fredrik > > From kostis@REDACTED Wed Mar 22 14:48:39 2006 From: kostis@REDACTED (Kostis Sagonas) Date: Wed, 22 Mar 2006 14:48:39 +0100 (MET) Subject: dialyzer and ets:select In-Reply-To: Mail from 'Samuel Rivas ' dated: Wed, 22 Mar 2006 13:46:25 +0100 Message-ID: <200603221348.k2MDmd5A020989@spikklubban.it.uu.se> Samuel Rivas wrote: > > Seems that dialyzer makes a wrong typing for ets:select. I wrote a > little module to check it: > > -module(ets_fail). > -export([foo/1]). > > foo(Table) -> > Tuples = ets:select(Table, [{{'_', '$1', '$2'}, [], ['$$']}]), > [list_to_tuple(Tuple) || Tuple <- Tuples]. Thanks for the report. I will refrain from commenting on the choice of names for variables in your program -- this is not what confused Dialyzer, but it surely confused me... ets:select/2 is one of the BIFs implemented in C. For these BIFs, Dialyzer has hard-coded knowledge about its types, obtained mostly by consulting the Erlang/OTP online implementation. In this case, it reads: select(Tab, MatchSpec) -> [Object] Types: Tab = tid() | atom() Object = tuple() MatchSpec = match_spec() Your example shows that the documentation is not correct. Can somebody from the OTP team tell us the correct return type of ets:select/2 ? Thanks, Kostis PS. (Unrelated) One more thing needs to be fixed in the OTP documentation. For lists:split/2 the first argument should read starting from "0" rather than "1". From serge@REDACTED Wed Mar 22 14:51:04 2006 From: serge@REDACTED (Serge Aleynikov) Date: Wed, 22 Mar 2006 08:51:04 -0500 Subject: Terminating application during startup In-Reply-To: <44214FA4.5010207@erix.ericsson.se> References: <200603211459.32100.ft@it.su.se> <44211C17.8050104@erix.ericsson.se> <200603221122.22367.ft@it.su.se> <44214FA4.5010207@erix.ericsson.se> Message-ID: <44215648.4040001@hq.idt.net> How would you recommend to deal with a problem when there's a syntax error in the application's config file? It get's parsed before the call to start/2. Serge Gunilla Arendt wrote: > Then let your application recognize that it cannot start, > print out an error message and call erlang:halt(N) with an appropriate > error code N. > > -------------- > -module(my). > > -export([start/2]). > > start(normal, [foo]) -> > io:format("Cannot start due to whatever~n", []), > erlang:halt(1). > --------------- > > % erl -boot my -noshell > Cannot start due to whatever > % > > > Regards, Gunilla > > Fredrik Thulin wrote: > >> On Wednesday 22 March 2006 10:42, Gunilla Arendt wrote: >> >>> Hi, >>> >>> This bug is fixed in OTP R11B. >> >> >> Thanks. >> >>> I'm not quite sure what you mean with "terminating nicely". If you >>> mean that you want the node to survive even if an application fails >>> to start, set the type of the application to transient in the .rel >>> file. >> >> >> I'm using Erlang to write a SIP-server that I want to make work like >> most other server softwares people are used to, so that it can be used >> without any knowledge about Erlang, beam, Mnesia and so on, just >> regular system administration skills. 
>> >> In my opinion, this means that if certain fatal errors occur during >> startup, the Erlang VM (whole node) should terminate with >> * a nice and understandable error message on standard output >> >> and >> >> * a non-zero exit status so that people can start the server from a >> shellscript and know if it failed or succeeded. >> >> Also, the generation of erl_crash.dump should be avoided when I (as >> programmer) don't think it is needed (for example, the server could be >> started in a just-test-the-config-syntax mode). >> >> /Fredrik From ft@REDACTED Wed Mar 22 15:02:22 2006 From: ft@REDACTED (Fredrik Thulin) Date: Wed, 22 Mar 2006 15:02:22 +0100 Subject: Terminating application during startup In-Reply-To: <44214FA4.5010207@erix.ericsson.se> References: <200603211459.32100.ft@it.su.se> <200603221122.22367.ft@it.su.se> <44214FA4.5010207@erix.ericsson.se> Message-ID: <200603221502.22263.ft@it.su.se> On Wednesday 22 March 2006 14:22, Gunilla Arendt wrote: > Then let your application recognize that it cannot start, > print out an error message and call erlang:halt(N) with an > appropriate error code N. I thought there was a point in letting the application loading code notice the error so that it could let other already started applications shut down in an orderly fashion. I would have expected especially distributed Mnesia operations to gain by being shut down 'nicely' rather than by erlang:halt/1? Besides these doubts of mine, thank you for sharing your advice. /Fredrik From samuel@REDACTED Wed Mar 22 15:10:50 2006 From: samuel@REDACTED (Samuel Rivas) Date: Wed, 22 Mar 2006 15:10:50 +0100 Subject: dialyzer and ets:select In-Reply-To: <200603221348.k2MDmd5A020989@spikklubban.it.uu.se> References: <200603221348.k2MDmd5A020989@spikklubban.it.uu.se> Message-ID: <20060322141050.GA24819@lambdastream.com> Kostis Sagonas wrote: > > > > Seems that dialyzer makes a wrong typing for ets:select. I wrote a > > little module to check it: > > > > -module(ets_fail). > > -export([foo/1]). > > > > foo(Table) -> > > Tuples = ets:select(Table, [{{'_', '$1', '$2'}, [], ['$$']}]), > > [list_to_tuple(Tuple) || Tuple <- Tuples]. > > Thanks for the report. I will refrain from commenting on the > choice of names for variables in your program -- this is not > what confused Dialyzer, but it surely confused me... Certainly Tuple/s are not the best names for them -- Samuel From raimo@REDACTED Wed Mar 22 16:50:53 2006 From: raimo@REDACTED (Raimo Niskanen) Date: 22 Mar 2006 16:50:53 +0100 Subject: dialyzer and ets:select References: , <200603221348.k2MDmd5A020989@spikklubban.it.uu.se> Message-ID: Since a match spec can build any term the signature correction should be: Object = term() The rest of the ets:select/2,3 documentation describes a lot about how to build various result terms, so in this case it seems it is only the documentation signature that is wrong. Probably a cut-and-past bug from e.g ets:lookup. kostis@REDACTED (Kostis Sagonas) writes: > Samuel Rivas wrote: > > > > Seems that dialyzer makes a wrong typing for ets:select. I wrote a > > little module to check it: > > > > -module(ets_fail). > > -export([foo/1]). > > > > foo(Table) -> > > Tuples = ets:select(Table, [{{'_', '$1', '$2'}, [], ['$$']}]), > > [list_to_tuple(Tuple) || Tuple <- Tuples]. > > Thanks for the report. I will refrain from commenting on the > choice of names for variables in your program -- this is not > what confused Dialyzer, but it surely confused me... > > > ets:select/2 is one of the BIFs implemented in C. 
For these BIFs, > Dialyzer has hard-coded knowledge about its types, obtained mostly > by consulting the Erlang/OTP online implementation. In this case, > it reads: > > select(Tab, MatchSpec) -> [Object] > > Types: > > Tab = tid() | atom() > Object = tuple() > MatchSpec = match_spec() > > Your example shows that the documentation is not correct. > > Can somebody from the OTP team tell us the correct return type of > ets:select/2 ? > > Thanks, > Kostis > > PS. (Unrelated) > > One more thing needs to be fixed in the OTP documentation. For > lists:split/2 > the first argument should read starting from "0" rather than "1". > Already corrected in the R10B-10 documentation. -- / Raimo Niskanen, Erlang/OTP, Ericsson AB From david.nospam.hopwood@REDACTED Wed Mar 22 18:37:51 2006 From: david.nospam.hopwood@REDACTED (David Hopwood) Date: Wed, 22 Mar 2006 17:37:51 +0000 Subject: Erlang/OTP R10B-10 has been released In-Reply-To: <200603220320.k2M3KJWN424398@atlas.otago.ac.nz> References: <200603220320.k2M3KJWN424398@atlas.otago.ac.nz> Message-ID: <44218B6F.6060002@blueyonder.co.uk> Richard A. O'Keefe wrote: > Let's add a "feature" pseudo-function (strictly speaking it should be > an abstract pattern, but here I am not assuming abstract patterns) > with one argument which must at compile time be an atom; its values > are limited to numbers, atoms, and strings. Compiled using erlc, > we might have -D=, .... Compiled using a function > call, we might pass the features through the options list somehow. > [...] > We cannot make a choice between alternatives some of which our > compiler can read and some of which it can't. (Lisp systems processing > #+ and #- have to be careful not to try to convert things that look like > numbers into numeric form, because the point of conditionalising might > be to choose between 80-bit numbers and 64-bit numbers, and a 64-bit > system shouldn't do anything with the 80-bit numbers.) > > We cannot conditionally export things. For that we would need new > syntax, > > -export([...]) when . > > We cannot make the *existence* of a function conditional, but that's > as it should be. If a function is called or exported, it should exist. > If it is not called and not exported, then dead code elimination should > get rid of it anyway. My experience (mainly in C, but not specific to C) is that this is not sufficient in practice. For example, in a recent embedded system, I have: #define ENABLE_CONTROL 1 #define ENABLE_HEATER 0 #define ENABLE_HEATER_ALARM 0 #define ENABLE_UV 1 #define ENABLE_ANTISTATIC 1 and so on, for ~89 features. This does not mean that there are 2^89 combinations of features that need to be tested. The final program is going to have *all* remaining features enabled, after any that don't work or are redundant have been stripped out. A test with some features disabled is only intended to be a partial test. There are two main reasons to disable a feature: - because it doesn't work. Sometimes this is because the associated hardware doesn't work yet, and sometimes the code doesn't compile or has known bugs that would interfere with testing the rest of the system. - because we want to replace it with a mock/stub implementation for testing on a development system rather than on the actual hardware. In some cases the code would not compile on the development system because a needed library is not implemented there. 
Note that as well as stubbing out the code that implements it, disabling a feature sometimes needs to change code that would use that feature in other modules. While I like the general idea of feature pseudo-functions, I think that to be useful as a tool for testing parts of programs during development (which is the main thing I use conditional compilation for), there must be support for excluding code that doesn't compile. -- David Hopwood From ft@REDACTED Thu Mar 23 13:33:49 2006 From: ft@REDACTED (Fredrik Thulin) Date: Thu, 23 Mar 2006 13:33:49 +0100 Subject: STUN server in Erlang Message-ID: <200603231333.49639.ft@it.su.se> Hi I just committed a basic STUN [1] server written natively in Erlang into the YXA [2] subversion repository. Just thought I'd mention this on the list so that anyone looking for a STUN server or client in Erlang knows where to find at least some basic work. This post is mostly for the archives I guess. /Fredrik [1] STUN is 'Simple Traversal of User Datagram Protocol', defined in RFC3489. I implemented as much of it as I needed but from rfc3489bis-03. [2] YXA is a SIP server written in Erlang, can be found at http://www.stacken.kth.se/project/yxa/ From vlad.xx.dumitrescu@REDACTED Thu Mar 23 14:14:42 2006 From: vlad.xx.dumitrescu@REDACTED (Vlad Dumitrescu XX (LN/EAB)) Date: Thu, 23 Mar 2006 14:14:42 +0100 Subject: [OT] The Final Theory Message-ID: <11498CB7D3FCB54897058DE63BE3897C01677000@esealmw105.eemea.ericsson.se> hi all, I just stumbled upon this http://www.thefinaltheory.com/, and it was a long time since I had such a good laugh, so I thought I'd share it. Check the FAQ and the free chapter where Newton's gravitation theory is dismissed as bogus (Einstein gets a separate chapter :-) best regards, Vlad -------------- next part -------------- An HTML attachment was scrubbed... URL: From serge@REDACTED Thu Mar 23 16:21:13 2006 From: serge@REDACTED (Serge Aleynikov) Date: Thu, 23 Mar 2006 10:21:13 -0500 Subject: smart exceptions In-Reply-To: <20060307162731.30374.qmail@web34402.mail.mud.yahoo.com> References: <20060307162731.30374.qmail@web34402.mail.mud.yahoo.com> Message-ID: <4422BCE9.7060507@hq.idt.net> Thomas, I believe this might be a bug in the parse_transform causing a compiler warning in cases like the one attached. $ erlc -W +'{parse_transform, smart_exceptions}' t.erl ./t.erl:12: variable 'Result' unsafe in 'case' (line 6) $ erlc -W t.erl [no warnings] Serge Thomas Lindgren wrote: > > --- Serge Aleynikov wrote: > > >>Thomas, thanks for your tips. >> >>Another question: In the attached example when an >>undefined function is >>called (test case 0) the line number doesn't get >>written as you can see >>below. Is this expected? > > > Yes. There are some cases that can't be (cheaply) > caught by smart_exceptions. At compile-time, we don't > know if the function is supposed to be defined or not, > and wrapping a catch around every function call to > handle if it's undefined seems like overkill. > > You will see the same thing when calling bad funs, I > believe. And calling code that wasn't compiler with > smart exceptions will throw dumb old exceptions. > > All of these appear because this sort of checking > seemed too expensive. > > Oh yes, there is another case where you will get dumb > exceptions: when a binary expression fails (e.g., A = > {foo}, <>). This one is something that should be > fixed, but I've put it off. > > Finally, there is a fundamental weakness: > smart_exceptions do not handle expressions with > exported variables well. 
This is a thorny issue, but > if erlc reports that variables are mysteriously > undefined, that may be the cause. > > (And as you can see, smart exceptions is really > functionality that sits better integrated in the VM > :-) > > If you have any usage/features feedback, send me a > mail. > > Best, > Thomas > > > __________________________________________________ > Do You Yahoo!? > Tired of spam? Yahoo! Mail has the best spam protection around > http://mail.yahoo.com > -- Serge Aleynikov R&D Telecom, IDT Corp. Tel: (973) 438-3436 Fax: (973) 438-1464 serge@REDACTED -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: t.erl URL: From orbitz@REDACTED Thu Mar 23 16:42:04 2006 From: orbitz@REDACTED (orbitz@REDACTED) Date: Thu, 23 Mar 2006 10:42:04 -0500 Subject: 'after' in gen_server Message-ID: <9a5a1e913c723f08ee3dcb200bb3b520@ezabel.com> I am reworking a bit of code into gen_server pattern. In this code I have a typical loop construct. In this I want to perform an action if no messages have been receive in a certain amount of time, to do this I simply have after sometimeout ->. Is there any equivalence of this in gen_server? It has been suggested that I use a timer and record when the last message has come in and when the timer signals check against that message. This seems a poor solution. Another suggestions was to use a gen_fsm with a timeout, but this seem a bit much just to get an 'after' mechanism. Any other suggestions? Perhaps my disregard of gen_fsm is a bit hasty? From rpettit@REDACTED Thu Mar 23 17:04:03 2006 From: rpettit@REDACTED (Rick Pettit) Date: Thu, 23 Mar 2006 10:04:03 -0600 Subject: 'after' in gen_server In-Reply-To: <9a5a1e913c723f08ee3dcb200bb3b520@ezabel.com> References: <9a5a1e913c723f08ee3dcb200bb3b520@ezabel.com> Message-ID: <20060323160403.GA26177@vailsys.com> On Thu, Mar 23, 2006 at 10:42:04AM -0500, orbitz@REDACTED wrote: > I am reworking a bit of code into gen_server pattern. In this code I > have a typical loop construct. In this I want to perform an action if > no messages have been receive in a certain amount of time, to do this I > simply have after sometimeout ->. Is there any equivalence of this in > gen_server? If I understand you correctly, all you need to do is set a timeout when returning from one of the gen_server callback functions. For example, instead of returning {ok,State} from Module:init/1, return {ok,State,TimeoutMs}. You can do this for all the gen_server callback routines (at least handle_call/handle_cast, etc). This effectively sets a timer which will expire if no message is received by the gen_server before TimeoutMs has elapsed. If/when the timer does expire, you receive a 'timeout' message in Module:handle_info/2. -Rick P.S. Since gen_server abstracts the server receive loop, I (as a general rule of thumb) *never* call receive in a gen_server callback module. > It has been suggested that I use a timer and record when the last > message has come in and when the timer signals check against that > message. This seems a poor solution. Another suggestions was to use a > gen_fsm with a timeout, but this seem a bit much just to get an 'after' > mechanism. Any other suggestions? Perhaps my disregard of gen_fsm is a > bit hasty? 
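To make the timeout mechanism described above concrete, here is a minimal sketch of a gen_server that arms an idle timeout from every callback; the module name, the dummy state and the 60-second value are arbitrary choices, and real error handling is omitted:

-module(idle_demo).
-behaviour(gen_server).

-export([start_link/0]).
-export([init/1, handle_call/3, handle_cast/2, handle_info/2,
         terminate/2, code_change/3]).

-define(IDLE_TIMEOUT, 60000).  %% milliseconds; whatever "idle" means to you

start_link() ->
    gen_server:start_link({local, ?MODULE}, ?MODULE, [], []).

init([]) ->
    %% Returning {ok, State, Timeout} arms the idle timeout.
    {ok, no_state, ?IDLE_TIMEOUT}.

handle_call(_Request, _From, State) ->
    {reply, ok, State, ?IDLE_TIMEOUT}.

handle_cast(_Msg, State) ->
    {noreply, State, ?IDLE_TIMEOUT}.

%% If no call, cast or other message arrives within ?IDLE_TIMEOUT,
%% gen_server delivers the atom 'timeout' to handle_info/2.
handle_info(timeout, State) ->
    %% the server has been idle too long - investigate here
    {noreply, State, ?IDLE_TIMEOUT};
handle_info(_Info, State) ->
    {noreply, State, ?IDLE_TIMEOUT}.

terminate(_Reason, _State) ->
    ok.

code_change(_OldVsn, State, _Extra) ->
    {ok, State}.
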
From robert.virding@REDACTED Thu Mar 23 20:34:25 2006 From: robert.virding@REDACTED (Robert Virding) Date: Thu, 23 Mar 2006 20:34:25 +0100 Subject: [OT] The Final Theory In-Reply-To: <11498CB7D3FCB54897058DE63BE3897C01677000@esealmw105.eemea.ericsson.se> References: <11498CB7D3FCB54897058DE63BE3897C01677000@esealmw105.eemea.ericsson.se> Message-ID: <4422F841.408@telia.com> Sigh! How can you get it so wrong? Haven't bothered to load down Newton yet, the Science Flaws section was enough. What is even more worrying is peoples reaction to it. Robert Vlad Dumitrescu XX (LN/EAB) skrev: > hi all, > > I just stumbled upon this http://www.thefinaltheory.com/, and it was a > long time since I had such a good laugh, so I thought I'd share it. > Check the FAQ and the free chapter where Newton's gravitation theory is > dismissed as bogus (Einstein gets a separate chapter :-) > > best regards, > Vlad > From orbitz@REDACTED Thu Mar 23 22:26:22 2006 From: orbitz@REDACTED (orbitz@REDACTED) Date: Thu, 23 Mar 2006 16:26:22 -0500 Subject: 'after' in gen_server In-Reply-To: <20060323160403.GA26177@vailsys.com> References: <9a5a1e913c723f08ee3dcb200bb3b520@ezabel.com> <20060323160403.GA26177@vailsys.com> Message-ID: <039a7a019ac116a8836732d025270c7c@ezabel.com> I want to do it from all callbacks. Basically I want to make it so if the server is idle for X amount of time, then that means something is wrong and I need to do something to figure out what is wrong. On Mar 23, 2006, at 11:04 AM, Rick Pettit wrote: > On Thu, Mar 23, 2006 at 10:42:04AM -0500, orbitz@REDACTED wrote: >> I am reworking a bit of code into gen_server pattern. In this code I >> have a typical loop construct. In this I want to perform an action if >> no messages have been receive in a certain amount of time, to do this >> I >> simply have after sometimeout ->. Is there any equivalence of this in >> gen_server? > > If I understand you correctly, all you need to do is set a timeout when > returning from one of the gen_server callback functions. > > For example, instead of returning {ok,State} from Module:init/1, return > {ok,State,TimeoutMs}. You can do this for all the gen_server callback > routines (at least handle_call/handle_cast, etc). > > This effectively sets a timer which will expire if no message is > received > by the gen_server before TimeoutMs has elapsed. If/when the timer does > expire, you receive a 'timeout' message in Module:handle_info/2. > > -Rick > > P.S. Since gen_server abstracts the server receive loop, I (as a > general rule > of thumb) *never* call receive in a gen_server callback module. > >> It has been suggested that I use a timer and record when the last >> message has come in and when the timer signals check against that >> message. This seems a poor solution. Another suggestions was to use >> a >> gen_fsm with a timeout, but this seem a bit much just to get an >> 'after' >> mechanism. Any other suggestions? Perhaps my disregard of gen_fsm is >> a >> bit hasty? > From ken@REDACTED Thu Mar 23 22:26:18 2006 From: ken@REDACTED (Kenneth Johansson) Date: Thu, 23 Mar 2006 22:26:18 +0100 Subject: The Computer Language Shootout In-Reply-To: References: Message-ID: <1143149178.3639.5.camel@tiger> On Sat, 2006-03-18 at 05:05 -0600, Ulf Wiger (AL/EAB) wrote: > > Here's a version that uses ets instead of dict. > In order to make sure the ets tables were removed > in an orderly fashion, I made a small wrapper > function. 
I submitted it but I managed to confuse myself and send the wrong file :( Your version is in the queue and should be included in a few days at most. My version by the way was dead last :( a wooping 99 times slower than the fastest entry. http://shootout.alioth.debian.org/gp4/benchmark.php?test=knucleotide&lang=all From rpettit@REDACTED Thu Mar 23 22:35:21 2006 From: rpettit@REDACTED (Rick Pettit) Date: Thu, 23 Mar 2006 15:35:21 -0600 Subject: 'after' in gen_server In-Reply-To: <039a7a019ac116a8836732d025270c7c@ezabel.com> References: <9a5a1e913c723f08ee3dcb200bb3b520@ezabel.com> <20060323160403.GA26177@vailsys.com> <039a7a019ac116a8836732d025270c7c@ezabel.com> Message-ID: <20060323213521.GA9978@vailsys.com> On Thu, Mar 23, 2006 at 04:26:22PM -0500, orbitz@REDACTED wrote: > I want to do it from all callbacks. Basically I want to make it so if > the server is idle for X amount of time, then that means something is > wrong and I need to do something to figure out what is wrong. Then have your callbacks set a gen_server "idle timeout" by way of the gen_server timeout mechanism, and service timeouts in handle_info/2. It is as simple as passing a timeout value in the return value from your callbacks--Module:init/1, Module:handle_call/3, Module:handle_cast/2, etc. -Rick > On Mar 23, 2006, at 11:04 AM, Rick Pettit wrote: > > >On Thu, Mar 23, 2006 at 10:42:04AM -0500, orbitz@REDACTED wrote: > >>I am reworking a bit of code into gen_server pattern. In this code I > >>have a typical loop construct. In this I want to perform an action if > >>no messages have been receive in a certain amount of time, to do this > >>I > >>simply have after sometimeout ->. Is there any equivalence of this in > >>gen_server? > > > >If I understand you correctly, all you need to do is set a timeout when > >returning from one of the gen_server callback functions. > > > >For example, instead of returning {ok,State} from Module:init/1, return > >{ok,State,TimeoutMs}. You can do this for all the gen_server callback > >routines (at least handle_call/handle_cast, etc). > > > >This effectively sets a timer which will expire if no message is > >received > >by the gen_server before TimeoutMs has elapsed. If/when the timer does > >expire, you receive a 'timeout' message in Module:handle_info/2. > > > >-Rick > > > >P.S. Since gen_server abstracts the server receive loop, I (as a > >general rule > > of thumb) *never* call receive in a gen_server callback module. > > > >>It has been suggested that I use a timer and record when the last > >>message has come in and when the timer signals check against that > >>message. This seems a poor solution. Another suggestions was to use > >>a > >>gen_fsm with a timeout, but this seem a bit much just to get an > >>'after' > >>mechanism. Any other suggestions? Perhaps my disregard of gen_fsm is > >>a > >>bit hasty? > > > From orbitz@REDACTED Fri Mar 24 02:59:45 2006 From: orbitz@REDACTED (orbitz@REDACTED) Date: Thu, 23 Mar 2006 20:59:45 -0500 Subject: 'after' in gen_server In-Reply-To: <20060323213521.GA9978@vailsys.com> References: <9a5a1e913c723f08ee3dcb200bb3b520@ezabel.com> <20060323160403.GA26177@vailsys.com> <039a7a019ac116a8836732d025270c7c@ezabel.com> <20060323213521.GA9978@vailsys.com> Message-ID: The only problem I have with that is it sounds very error prone. What if I forget a timeout? The more callbacks i implement the uglier it'll get, won't it? 
On Mar 23, 2006, at 4:35 PM, Rick Pettit wrote: > On Thu, Mar 23, 2006 at 04:26:22PM -0500, orbitz@REDACTED wrote: >> I want to do it from all callbacks. Basically I want to make it so if >> the server is idle for X amount of time, then that means something is >> wrong and I need to do something to figure out what is wrong. > > Then have your callbacks set a gen_server "idle timeout" by way of the > gen_server timeout mechanism, and service timeouts in handle_info/2. > > It is as simple as passing a timeout value in the return value from > your > callbacks--Module:init/1, Module:handle_call/3, Module:handle_cast/2, > etc. > > -Rick > >> On Mar 23, 2006, at 11:04 AM, Rick Pettit wrote: >> >>> On Thu, Mar 23, 2006 at 10:42:04AM -0500, orbitz@REDACTED wrote: >>>> I am reworking a bit of code into gen_server pattern. In this code >>>> I >>>> have a typical loop construct. In this I want to perform an action >>>> if >>>> no messages have been receive in a certain amount of time, to do >>>> this >>>> I >>>> simply have after sometimeout ->. Is there any equivalence of this >>>> in >>>> gen_server? >>> >>> If I understand you correctly, all you need to do is set a timeout >>> when >>> returning from one of the gen_server callback functions. >>> >>> For example, instead of returning {ok,State} from Module:init/1, >>> return >>> {ok,State,TimeoutMs}. You can do this for all the gen_server callback >>> routines (at least handle_call/handle_cast, etc). >>> >>> This effectively sets a timer which will expire if no message is >>> received >>> by the gen_server before TimeoutMs has elapsed. If/when the timer >>> does >>> expire, you receive a 'timeout' message in Module:handle_info/2. >>> >>> -Rick >>> >>> P.S. Since gen_server abstracts the server receive loop, I (as a >>> general rule >>> of thumb) *never* call receive in a gen_server callback module. >>> >>>> It has been suggested that I use a timer and record when the last >>>> message has come in and when the timer signals check against that >>>> message. This seems a poor solution. Another suggestions was to >>>> use >>>> a >>>> gen_fsm with a timeout, but this seem a bit much just to get an >>>> 'after' >>>> mechanism. Any other suggestions? Perhaps my disregard of gen_fsm >>>> is >>>> a >>>> bit hasty? >>> >> > From rpettit@REDACTED Fri Mar 24 04:01:11 2006 From: rpettit@REDACTED (Rick Pettit) Date: Thu, 23 Mar 2006 21:01:11 -0600 Subject: 'after' in gen_server In-Reply-To: References: <9a5a1e913c723f08ee3dcb200bb3b520@ezabel.com> <20060323160403.GA26177@vailsys.com> <039a7a019ac116a8836732d025270c7c@ezabel.com> <20060323213521.GA9978@vailsys.com> Message-ID: <20060324030111.GA22234@vailsys.com> On Thu, Mar 23, 2006 at 08:59:45PM -0500, orbitz@REDACTED wrote: > The only problem I have with that is it sounds very error prone. What > if I forget a timeout? The more callbacks i implement the uglier it'll > get, won't it? The way I see it you have two options: 1) Use the gen_server behaviour and its timeout mechanism--this involves returning an additional timeout parameter from: Module:init/1 Module:handle_call/3 Module:handle_cast/2 Module:handle_info/2 You will service the idle timeout in Module:handle_info/2. This is pretty simple, IMO, but I agree it is overkill if you don't otherwise require a gen_server. 2) Don't use gen_server (or any OTP behaviour)--rewrite your process as a simple server with receive loop. 
Now that you have direct access to the receive loop, you can plug in your 'after' clause with idle timeout and deal with the server going idle there. If you insist on using a gen_server than I can't see the point in not going with (1)--anything else is going to require more server state, more code, and will ultimately be more error prone. -Rick > On Mar 23, 2006, at 4:35 PM, Rick Pettit wrote: > > >On Thu, Mar 23, 2006 at 04:26:22PM -0500, orbitz@REDACTED wrote: > >>I want to do it from all callbacks. Basically I want to make it so if > >>the server is idle for X amount of time, then that means something is > >>wrong and I need to do something to figure out what is wrong. > > > >Then have your callbacks set a gen_server "idle timeout" by way of the > >gen_server timeout mechanism, and service timeouts in handle_info/2. > > > >It is as simple as passing a timeout value in the return value from > >your > >callbacks--Module:init/1, Module:handle_call/3, Module:handle_cast/2, > >etc. > > > >-Rick > > > >>On Mar 23, 2006, at 11:04 AM, Rick Pettit wrote: > >> > >>>On Thu, Mar 23, 2006 at 10:42:04AM -0500, orbitz@REDACTED wrote: > >>>>I am reworking a bit of code into gen_server pattern. In this code > >>>>I > >>>>have a typical loop construct. In this I want to perform an action > >>>>if > >>>>no messages have been receive in a certain amount of time, to do > >>>>this > >>>>I > >>>>simply have after sometimeout ->. Is there any equivalence of this > >>>>in > >>>>gen_server? > >>> > >>>If I understand you correctly, all you need to do is set a timeout > >>>when > >>>returning from one of the gen_server callback functions. > >>> > >>>For example, instead of returning {ok,State} from Module:init/1, > >>>return > >>>{ok,State,TimeoutMs}. You can do this for all the gen_server callback > >>>routines (at least handle_call/handle_cast, etc). > >>> > >>>This effectively sets a timer which will expire if no message is > >>>received > >>>by the gen_server before TimeoutMs has elapsed. If/when the timer > >>>does > >>>expire, you receive a 'timeout' message in Module:handle_info/2. > >>> > >>>-Rick > >>> > >>>P.S. Since gen_server abstracts the server receive loop, I (as a > >>>general rule > >>> of thumb) *never* call receive in a gen_server callback module. > >>> > >>>>It has been suggested that I use a timer and record when the last > >>>>message has come in and when the timer signals check against that > >>>>message. This seems a poor solution. Another suggestions was to > >>>>use > >>>>a > >>>>gen_fsm with a timeout, but this seem a bit much just to get an > >>>>'after' > >>>>mechanism. Any other suggestions? Perhaps my disregard of gen_fsm > >>>>is > >>>>a > >>>>bit hasty? 
> >>> > >> > > > From orbitz@REDACTED Fri Mar 24 04:24:17 2006 From: orbitz@REDACTED (orbitz@REDACTED) Date: Thu, 23 Mar 2006 22:24:17 -0500 Subject: 'after' in gen_server In-Reply-To: <20060324030111.GA22234@vailsys.com> References: <9a5a1e913c723f08ee3dcb200bb3b520@ezabel.com> <20060323160403.GA26177@vailsys.com> <039a7a019ac116a8836732d025270c7c@ezabel.com> <20060323213521.GA9978@vailsys.com> <20060324030111.GA22234@vailsys.com> Message-ID: <6be8a8c4dea0259dffab16a016e74c70@ezabel.com> The reason I don't like 1 that much is because my code looks something like: handle_cast({irc_connect, Nick}, #state{state=connecting, dict=Dict, irclib=Irclib} = State) -> % Do connect stuff join_channels(Irclib, dict_proc:fetch(join_on_connect, Dict)), {noreply, State#state{nick=Nick, state=idle}}; handle_cast({stop, _}, #state{state=connecting} = State) -> {stop, stop, State}; handle_cast({irc_message, {_, "PONG", _}}, #state{state=pong, pong_timeout=Ref} = State) -> {ok, cancel} = timer:cancel(Ref), {noreply, State#state{state=idle, pong_timeout=undefined}}; handle_cast(irc_closed, #state{irclib=Irclib} = State) -> irc_lib:connect(Irclib), {noreply, State#state{state=connecting}}; handle_cast({irc_message, {_, "PING", [Server]}}, #state{irclib=Irclib} = State) -> irc_lib:pong(Irclib, Server), {noreply, State}; handle_cast({irc_message, {_, "KICK", [Channel, Nick, _]}}, #state{irclib=Irclib, nick=Nick} = State) -> irc_lib:join(Irclib, Channel), {noreply, State}; So it gets ugly to return a timeout in all of these places. On Mar 23, 2006, at 10:01 PM, Rick Pettit wrote: > On Thu, Mar 23, 2006 at 08:59:45PM -0500, orbitz@REDACTED wrote: >> The only problem I have with that is it sounds very error prone. What >> if I forget a timeout? The more callbacks i implement the uglier >> it'll >> get, won't it? > > The way I see it you have two options: > > 1) Use the gen_server behaviour and its timeout mechanism--this > involves > returning an additional timeout parameter from: > > Module:init/1 > Module:handle_call/3 > Module:handle_cast/2 > Module:handle_info/2 > > You will service the idle timeout in Module:handle_info/2. This > is pretty > simple, IMO, but I agree it is overkill if you don't otherwise > require a > gen_server. > > 2) Don't use gen_server (or any OTP behaviour)--rewrite your process > as > a simple server with receive loop. Now that you have direct > access to > the receive loop, you can plug in your 'after' clause with idle > timeout > and deal with the server going idle there. > > If you insist on using a gen_server than I can't see the point in not > going > with (1)--anything else is going to require more server state, more > code, and > will ultimately be more error prone. > > -Rick > >> On Mar 23, 2006, at 4:35 PM, Rick Pettit wrote: >> >>> On Thu, Mar 23, 2006 at 04:26:22PM -0500, orbitz@REDACTED wrote: >>>> I want to do it from all callbacks. Basically I want to make it so >>>> if >>>> the server is idle for X amount of time, then that means something >>>> is >>>> wrong and I need to do something to figure out what is wrong. >>> >>> Then have your callbacks set a gen_server "idle timeout" by way of >>> the >>> gen_server timeout mechanism, and service timeouts in handle_info/2. >>> >>> It is as simple as passing a timeout value in the return value from >>> your >>> callbacks--Module:init/1, Module:handle_call/3, Module:handle_cast/2, >>> etc. 
>>> >>> -Rick >>> >>>> On Mar 23, 2006, at 11:04 AM, Rick Pettit wrote: >>>> >>>>> On Thu, Mar 23, 2006 at 10:42:04AM -0500, orbitz@REDACTED wrote: >>>>>> I am reworking a bit of code into gen_server pattern. In this >>>>>> code >>>>>> I >>>>>> have a typical loop construct. In this I want to perform an >>>>>> action >>>>>> if >>>>>> no messages have been receive in a certain amount of time, to do >>>>>> this >>>>>> I >>>>>> simply have after sometimeout ->. Is there any equivalence of >>>>>> this >>>>>> in >>>>>> gen_server? >>>>> >>>>> If I understand you correctly, all you need to do is set a timeout >>>>> when >>>>> returning from one of the gen_server callback functions. >>>>> >>>>> For example, instead of returning {ok,State} from Module:init/1, >>>>> return >>>>> {ok,State,TimeoutMs}. You can do this for all the gen_server >>>>> callback >>>>> routines (at least handle_call/handle_cast, etc). >>>>> >>>>> This effectively sets a timer which will expire if no message is >>>>> received >>>>> by the gen_server before TimeoutMs has elapsed. If/when the timer >>>>> does >>>>> expire, you receive a 'timeout' message in Module:handle_info/2. >>>>> >>>>> -Rick >>>>> >>>>> P.S. Since gen_server abstracts the server receive loop, I (as a >>>>> general rule >>>>> of thumb) *never* call receive in a gen_server callback module. >>>>> >>>>>> It has been suggested that I use a timer and record when the last >>>>>> message has come in and when the timer signals check against that >>>>>> message. This seems a poor solution. Another suggestions was to >>>>>> use >>>>>> a >>>>>> gen_fsm with a timeout, but this seem a bit much just to get an >>>>>> 'after' >>>>>> mechanism. Any other suggestions? Perhaps my disregard of gen_fsm >>>>>> is >>>>>> a >>>>>> bit hasty? >>>>> >>>> >>> >> > From ok@REDACTED Fri Mar 24 06:07:07 2006 From: ok@REDACTED (Richard A. O'Keefe) Date: Fri, 24 Mar 2006 17:07:07 +1200 (NZST) Subject: Erlang/OTP R10B-10 has been released Message-ID: <200603240507.k2O577hg440619@atlas.otago.ac.nz> David Hopwood replied to my observations about conditional compilation. I always read anything of his that I see with interest and respect, so it was a little alarming to see that we didn't appear to agree. My experience (mainly in C, but not specific to C) is that this is not sufficient in practice. For example, in a recent embedded system, I have: #define ENABLE_CONTROL 1 #define ENABLE_HEATER 0 #define ENABLE_HEATER_ALARM 0 #define ENABLE_UV 1 #define ENABLE_ANTISTATIC 1 and so on, for ~89 features. This does not mean that there are 2^89 combinations of features that need to be tested. The final program is going to have *all* remaining features enabled, after any that don't work or are redundant have been stripped out. A test with some features disabled is only intended to be a partial test. Fortunately, he and I are not talking about exactly the same thing. I was talking about feature tests IN CODE AS DELIVERED so that it is entirely possible that no two customers will be running the same code. The particular example program I was talking about has now been reduced to a possible 100,000 variants, and we have determined that most of them do NOT in fact work. (That's because just a few of the features interact badly.) David Hopwood is talking about temporarily snipping stuff out DURING DEVELOPMENT AND TESTING. There are two main reasons to disable a feature: - because it doesn't work. 
Sometimes this is because the associated hardware doesn't work yet, and sometimes the code doesn't compile or has known bugs that would interfere with testing the rest of the system. - because we want to replace it with a mock/stub implementation for testing on a development system rather than on the actual hardware. In some cases the code would not compile on the development system because a needed library is not implemented there. Note that as well as stubbing out the code that implements it, disabling a feature sometimes needs to change code that would use that feature in other modules. While I like the general idea of feature pseudo-functions, I think that to be useful as a tool for testing parts of programs during development (which is the main thing I use conditional compilation for), there must be support for excluding code that doesn't compile. There's a big difference here between C and Erlang. In C, the presence or absence of a declaration in one place can affect whether or not a later function can be compiled or not. As long as you DON'T use the preprocessor, that's not true in Erlang. In delivery mode, you want missing functions to be errors. But in development mode, it may be enough if the compiler *warns* about functions that are used or exported but not defined and automatically plants stubs that raise an exception if called. Taking the list of reasons in turn: * because it doesn't work That's a reason for not *calling* code, but not for not *compiling* it. * because the hardware doesn't work yet Agreed that you don't want to call code that uses hardware that doesn't work yet, but it's not clear to me why you wouldn't want to *compile* it. In fact, it might be *essential* to compile it to make sure that it doesn't interfere with something else. * because the code doesn't compile We are not talking about the desirability of #if in C, but in Erlang. It's much harder to write code that doesn't compile. If it doesn't compile, I don't want it processed until it *does*, and that means that I *don't* want it controlled by an "ENABLE" switch that might accidentally be set. * because the code has known bugs This is a reason not to *call* the code, not a reason not to *compile* it. You want to select test cases using some kind of rule-based thing that leaves out things that test bugs you know about until you are ready to test that those bugs have gone. * because we want to replace it with a mock/stub implementation This is the classic multiple implementations of a single interface issue. This is where you want child modules and a configuration management system that says "in this configuration use this child, in that configuration use that one." In effect, the preprocessor does this kind of stuff *BELOW* the language level using a very clunky and limited language. I'm saying that it should be done *ABOVE* the language level using some reasonable rule-based language. (Datalog?) * because the code would not compile on one system because a needed library isn't there. So we see that - test case selection needs to be informed by which tests exercise what, so that we don't (deliberately) *run* code that we don't intend to, but that doesn't have to mean we don't *compile* it - installing the software in different environments may require different implementations for some functions &c, which is very little different from the features for installation problem. And this can be done by child modules (which we don't have, but could do with) and some kind of declarative configuration language. 
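One way to read the last two points in Erlang terms, as a rough sketch rather than a worked-out proposal: select the implementation module at run time from configuration, so the mock/stub and the real code are ordinary modules behind one interface and both stay compiled. All module and key names below are made up for illustration:

-module(heater_if).
-export([set_level/1]).

%% Chooses the implementation module from the application environment.
%% 'heater_hw' would talk to the real hardware; 'heater_mock' is the
%% stub used on a development system.
impl() ->
    case application:get_env(myapp, heater_impl) of
        {ok, Mod} -> Mod;
        undefined -> heater_mock
    end.

set_level(Level) ->
    Mod = impl(),
    Mod:set_level(Level).

Which implementation gets used is then a configuration decision, not a compile-time one, and the unused module is still compiled and checked.
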
From ulf@REDACTED Fri Mar 24 07:04:23 2006 From: ulf@REDACTED (Ulf Wiger) Date: Fri, 24 Mar 2006 07:04:23 +0100 Subject: The Computer Language Shootout In-Reply-To: <1143149178.3639.5.camel@tiger> References: <1143149178.3639.5.camel@tiger> Message-ID: Den 2006-03-23 22:26:18 skrev Kenneth Johansson : > My version by the way was dead last :( a wooping 99 times slower than > the fastest entry. > > > http://shootout.alioth.debian.org/gp4/benchmark.php?test=knucleotide&lang=all It's a lot better now. (: Roughly a 10x improvement. There are several optimization flags available for HiPE. What's the outcome when those are applied? I noticed that the gcc code was compiled with '-O3', and the ocaml entry with '-noassert -unsafe -ccopt O3'. Not that I know what all that means, but it sure sounds like they are squeezing that little extra umph out of their programs. /Ulf W -- Ulf Wiger From ulf@REDACTED Fri Mar 24 07:08:12 2006 From: ulf@REDACTED (Ulf Wiger) Date: Fri, 24 Mar 2006 07:08:12 +0100 Subject: 'after' in gen_server In-Reply-To: <6be8a8c4dea0259dffab16a016e74c70@ezabel.com> References: <9a5a1e913c723f08ee3dcb200bb3b520@ezabel.com> <20060323160403.GA26177@vailsys.com> <039a7a019ac116a8836732d025270c7c@ezabel.com> <20060323213521.GA9978@vailsys.com> <20060324030111.GA22234@vailsys.com> <6be8a8c4dea0259dffab16a016e74c70@ezabel.com> Message-ID: Den 2006-03-24 04:24:17 skrev : > The reason I don't like 1 that much is because my code looks something > like: > handle_cast({irc_connect, Nick}, #state{state=connecting, dict=Dict, > irclib=Irclib} = State) -> > % Do connect stuff > join_channels(Irclib, dict_proc:fetch(join_on_connect, Dict)), > {noreply, State#state{nick=Nick, state=idle}}; > handle_cast({stop, _}, #state{state=connecting} = State) -> > {stop, stop, State}; ... > handle_cast({irc_message, {_, "KICK", [Channel, Nick, _]}}, > #state{irclib=Irclib, nick=Nick} = State) -> > irc_lib:join(Irclib, Channel), > {noreply, State}; > So it gets ugly to return a timeout in all of these places. So wrap the return value in e.g. a noreply/1 function: noreply(State) -> {noreply, State, _Timeout = 10000}. 
Then your code becomes: handle_cast({irc_connect, Nick}, #state{state=connecting, dict=Dict, irclib=Irclib} = State) -> % Do connect stuff join_channels(Irclib, dict_proc:fetch(join_on_connect, Dict)), noreply(State#state{nick=Nick, state=idle}); handle_cast({stop, _}, #state{state=connecting} = State) -> {stop, stop, State}; handle_cast({irc_message, {_, "PONG", _}}, #state{state=pong, pong_timeout=Ref} = State) -> {ok, cancel} = timer:cancel(Ref), noreply(State#state{state=idle, pong_timeout=undefined}); handle_cast(irc_closed, #state{irclib=Irclib} = State) -> irc_lib:connect(Irclib), noreply(State#state{state=connecting}); handle_cast({irc_message, {_, "PING", [Server]}}, #state{irclib=Irclib} = State) -> irc_lib:pong(Irclib, Server), noreply(State); handle_cast({irc_message, {_, "KICK", [Channel, Nick, _]}}, #state{irclib=Irclib, nick=Nick} = State) -> irc_lib:join(Irclib, Channel), noreply(State); BR, Ulf W -- Ulf Wiger From tzheng@REDACTED Thu Mar 23 19:54:00 2006 From: tzheng@REDACTED (Tony Zheng) Date: Thu, 23 Mar 2006 10:54:00 -0800 Subject: About Erlang system nodes In-Reply-To: References: <1134755532.5525.42.camel@home> <43A67622.9070000@hyber.org> <1139851452.29242.16.camel@home> <1139868771.29242.38.camel@home> <1139952724.1215.7.camel@gateway> Message-ID: <1143140040.25413.16.camel@gateway> Hi Chandru Are there any encrypted mechanisms when Mnesia replicate tables on different Erlang nodes? We will put Erlang nodes in different locations on internet, we want to know if it is secure for Mnesia to replicate tables on internet. Thanks. tony On Tue, 2006-02-14 at 17:26, chandru wrote: > Hi, > > You have to start the two nodes with the same cookie. When you start > up an erlang node, it looks for a file called .erlang.cookie in the > current directory and then the home directory. If it can't find > either, it creates a $HOME/.erlang.cookie with some random value in > it. You can create your own .erlang.cookie file with the same cookie > in it and then try starting the nodes. > > Before trying to create the schema, make sure that the command > > net_adm:ping(remote_node@REDACTED). > > returns pong. > > For security, the .erlang.cookie file should only be readable by the > user in whose context you are running the node. > > cheers > Chandru From sgsfak@REDACTED Fri Mar 24 09:14:09 2006 From: sgsfak@REDACTED (Stelianos G. Sfakianakis) Date: Fri, 24 Mar 2006 10:14:09 +0200 Subject: Scalability in widnows Message-ID: Hi all, Erlang is well known for its ability to scale, for example as shown here: http://www.sics.se/~joe/apachevsyaws.html I believe that much of this scalability comes from the underlying operating system and its event APIs (e.g. kqueue, poll etc) My question is what happens in windows platforms. Does erlang manage to excibit the same performance? Does it use some advanced windows API or the "good" old select? And finally do you have any real (production) Erlang application in windows and real world experience about its performace? Thank you very much for any responses! Stelios From ft@REDACTED Fri Mar 24 10:15:48 2006 From: ft@REDACTED (Fredrik Thulin) Date: Fri, 24 Mar 2006 10:15:48 +0100 Subject: About Erlang system nodes In-Reply-To: <1143140040.25413.16.camel@gateway> References: <1143140040.25413.16.camel@gateway> Message-ID: <200603241015.48147.ft@it.su.se> On Thursday 23 March 2006 19:54, Tony Zheng wrote: > Hi Chandru > > Are there any encrypted mechanisms when Mnesia replicate tables on > different Erlang nodes? 
We will put Erlang nodes in different > locations on internet, we want to know if it is secure for Mnesia to > replicate tables on internet. Use -proto_dist ssl, start your 'erl' with parameters like these : erl -name foo -proto_dist inet_ssl \ -ssl_dist_opt client_certfile /path/to/cert.comb \ -ssl_dist_opt server_certfile /path/to/cert.comb \ -ssl_dist_opt verify 2 cert.comb is a file containing a .pem file plus a .key-file, $ cat cert.comb Certificate: Data: Version: 3 (0x2) ... -----BEGIN CERTIFICATE----- ... -----END CERTIFICATE----- -----BEGIN RSA PRIVATE KEY----- ... -----END RSA PRIVATE KEY----- $ /Fredrik From chandrashekhar.mullaparthi@REDACTED Fri Mar 24 10:32:08 2006 From: chandrashekhar.mullaparthi@REDACTED (chandru) Date: Fri, 24 Mar 2006 09:32:08 +0000 Subject: About Erlang system nodes In-Reply-To: <1143140040.25413.16.camel@gateway> References: <43A67622.9070000@hyber.org> <1139851452.29242.16.camel@home> <1139868771.29242.38.camel@home> <1139952724.1215.7.camel@gateway> <1143140040.25413.16.camel@gateway> Message-ID: On 23/03/06, Tony Zheng wrote: > Hi Chandru > > Are there any encrypted mechanisms when Mnesia replicate tables on > different Erlang nodes? We will put Erlang nodes in different locations > on internet, we want to know if it is secure for Mnesia to replicate > tables on internet. > Thanks. You can run distributed erlang over SSL. That will encrypt all traffic between the nodes. See http://www.erlang.org/doc/doc-5.4.13/lib/ssl-3.0.11/doc/html/usersguide_frame.html for more info on how to do this. cheers Chandru From damir@REDACTED Fri Mar 24 11:01:16 2006 From: damir@REDACTED (Damir Horvat) Date: Fri, 24 Mar 2006 11:01:16 +0100 Subject: in-ram lists Message-ID: <20060324100116.GA11006@semp.x-si.org> Hi! I've just started with erlang and I need some help with this scenario. I'd like to have a list to which I can append (push) and from which I can shift elements. I'd like to make sort of a bucket (FIFO pipe). This is simple in languages where I come from (perl, python, C, ...). But how can I do this in erlang? I've read one cannot change the value of a variable. How can I implement this FIFO pipe in erlang? Thanks, Damir From ulf.wiger@REDACTED Fri Mar 24 11:07:07 2006 From: ulf.wiger@REDACTED (Ulf Wiger (AL/EAB)) Date: Fri, 24 Mar 2006 11:07:07 +0100 Subject: in-ram lists Message-ID: Look at the 'queue' module in stdlib. Regards, Ulf W > -----Original Message----- > From: owner-erlang-questions@REDACTED > [mailto:owner-erlang-questions@REDACTED] On Behalf Of Damir Horvat > Sent: den 24 mars 2006 11:01 > To: erlang-questions@REDACTED > Subject: in-ram lists > > Hi! > > I've just started with erlang and I need some help with this scenario. > > I'd like to have a list to which I can append (push) and from > which I can shift elements. I'd like to make sort of a bucket > (FIFO pipe). > > This is simple in languages where I come from (perl, python, C, ...). > But how can I do this in erlang? I've read one cannot change > the value of a variable. How can I implement this FIFO pipe in erlang? > > Thanks, > Damir > From damir@REDACTED Fri Mar 24 13:24:48 2006 From: damir@REDACTED (Damir Horvat) Date: Fri, 24 Mar 2006 13:24:48 +0100 Subject: in-ram lists In-Reply-To: References: Message-ID: <20060324122447.GA12165@semp.x-si.org> On Fri, Mar 24, 2006 at 11:07:07AM +0100, Ulf Wiger (AL/EAB) wrote: > > Look at the 'queue' module in stdlib. Ok, found it. Tryed it. Still have questions. Q1 = queue:new(). queue:in("item1", Q1). queue:in("item2", Q1). Now Q1 has only "item2". 
I still don't understand, how can I make a FIFO buffer if I can't append items to the queue to which I can refer with the same variable? Damir From vlad.xx.dumitrescu@REDACTED Fri Mar 24 13:32:07 2006 From: vlad.xx.dumitrescu@REDACTED (Vlad Dumitrescu XX (LN/EAB)) Date: Fri, 24 Mar 2006 13:32:07 +0100 Subject: in-ram lists Message-ID: <11498CB7D3FCB54897058DE63BE3897C0167745E@esealmw105.eemea.ericsson.se> Hi, > Q1 = queue:new(). > queue:in("item1", Q1). > queue:in("item2", Q1). Use instead: Q1 = queue:new(). Q2 = queue:in("item1", Q1). Q3 = queue:in("item2", Q2). Since variables can't be assigned to, the functions return the new queue (with the appended value). Regards, Vlad From rherilier@REDACTED Fri Mar 24 13:48:06 2006 From: rherilier@REDACTED (=?ISO-8859-1?Q?R=E9mi_H=E9rilier?=) Date: Fri, 24 Mar 2006 13:48:06 +0100 Subject: in-ram lists In-Reply-To: <20060324122447.GA12165@semp.x-si.org> References: <20060324122447.GA12165@semp.x-si.org> Message-ID: <4423EA86.20403@yahoo.fr> Damir Horvat wrote: > On Fri, Mar 24, 2006 at 11:07:07AM +0100, Ulf Wiger (AL/EAB) wrote: > > >> Look at the 'queue' module in stdlib. >> > > Ok, found it. Tryed it. Still have questions. > > Q1 = queue:new(). > queue:in("item1", Q1). > queue:in("item2", Q1). > > Now Q1 has only "item2". I still don't understand, how can I make a FIFO > buffer if I can't append items to the queue to which I can refer with > the same variable? > > Damir Don't forget to get the value returned by queue:in(Item,Queue). 1> Q1 = queue:new(). {[],[]} 2> Q2 = queue:in("item1", Q1). {["item1"],[]} 3> Q3 = queue:in("item2", Q2). {["item2"],["item1"]} 4> R?mi ___________________________________________________________________________ Nouveau : t?l?phonez moins cher avec Yahoo! Messenger ! D?couvez les tarifs exceptionnels pour appeler la France et l'international. T?l?chargez sur http://fr.messenger.yahoo.com From damir@REDACTED Fri Mar 24 14:26:56 2006 From: damir@REDACTED (Damir Horvat) Date: Fri, 24 Mar 2006 14:26:56 +0100 Subject: in-ram lists In-Reply-To: <4423EA86.20403@yahoo.fr> References: <20060324122447.GA12165@semp.x-si.org> <4423EA86.20403@yahoo.fr> Message-ID: <20060324132656.GA12516@semp.x-si.org> On Fri, Mar 24, 2006 at 01:48:06PM +0100, R?mi H?rilier wrote: > Don't forget to get the value returned by queue:in(Item,Queue). > > 1> Q1 = queue:new(). > {[],[]} > 2> Q2 = queue:in("item1", Q1). > {["item1"],[]} > 3> Q3 = queue:in("item2", Q2). > {["item2"],["item1"]} > 4> Ok, I get this. But what bothers me is, how to do this in functional way? I'd like to have this queue accessible in ram. I'd like to have two functions (push, shift) which adds new element to the queue and takes one off. I don't want to worry about naming variables to capture return values. For example: push (element, queue) # puts element on the tail of the queue shift (queue) # gets first element from the queue. That's all I need. Is this doable in erlang? From ulf.wiger@REDACTED Fri Mar 24 14:50:32 2006 From: ulf.wiger@REDACTED (Ulf Wiger (AL/EAB)) Date: Fri, 24 Mar 2006 14:50:32 +0100 Subject: The Computer Language Shootout Message-ID: Ulf Wiger wrote: > > I noticed that the gcc code was compiled with '-O3', > and the ocaml entry with '-noassert -unsafe -ccopt O3'. > Not that I know what all that means, but it sure sounds > like they are squeezing that little extra umph out of > their programs. So I did do some eprof profiling: 2> eprof:total_analyse(). 
FUNCTION CALLS TIME knucleotide:gen_freq/5 349968 34 % knucleotide:update_counter/3 349961 30 % ets:update_counter/3 349961 25 % ets:insert/2 104033 8 % knucleotide:to_upper_no_nl/2 51668 1 % ets:db_delete/1 15 1 % io:request/2 1699 0 % ... and so on. Adjusting the benchmark slightly so that it reads the data from file instead (basically two entry points main() -> Seq = dna_seq(stdin), calc(Seq), halt(0). from_file(F) -> {ok, Fd} = file:open(F, [read]), Seq = dna_seq(Fd), file:close(Fd), calc(Seq). And then changing dna_seq() to dna_seq(Fd) -> seek_three(Fd), dna_seq(Fd, []). dna_seq(Fd, Seq) -> case io:get_line(Fd,'') of eof -> list_to_binary(lists:reverse(Seq)); Line -> Uline = to_upper_no_nl(Line), dna_seq(Fd, [Uline|Seq]) end. and so on, mainly to make it easier to measure... I also removed the io:fwrite() calls and simply used lists:map/2 to collect the results. Compiling just gen_freq/5 to native gave very little (time went down from 1.12 sec to 1.16 sec (ca 3%), but compiling both gen_freq/5 and update_counter/3 gave significant speedup. Time now went down to 0.66 sec. (Commenting out the calls to gen_freq/5 left about 100 msec, which is probably not worth trying to optimise.) Comparing the different compilation options: normal: 1.22 sec native: 0.64 sec native+o3: 0.64 sec selective: 0.66 sec (gen_freq/5 and update_counter/1) Putting back all printouts, I can't see any major difference between non-native and native. This is quite interesting, as the total time reported is ca 1.48 secs. There *should* be a noticeable difference. Final experiment: $> cp $OTP_ROOT/lib/stdlib-1.13.10/src/lists.erl . $> cp $OTP_ROOT/lib/stdlib-1.13.10/src/io.erl . $> cp $OTP_ROOT/lib/stdlib-1.13.10/src/io_lib* . $> ls -1 *.erl io.erl io_lib.erl io_lib_format.erl io_lib_fread.erl io_lib_pretty.erl knucleotide.erl lists.erl $> erlc -W +native *.erl Rerunning again, I get 1.04 secs - a 30% speedup. What's likely to be causing problems are the transitions between native and non-native code, since many of the shootout benchmarks are I/O heavy. BR, Ulf W From orbitz@REDACTED Fri Mar 24 15:11:43 2006 From: orbitz@REDACTED (orbitz@REDACTED) Date: Fri, 24 Mar 2006 09:11:43 -0500 Subject: in-ram lists In-Reply-To: <20060324132656.GA12516@semp.x-si.org> References: <20060324122447.GA12165@semp.x-si.org> <4423EA86.20403@yahoo.fr> <20060324132656.GA12516@semp.x-si.org> Message-ID: You can always write a process to handle this stuff. If you "don't want to worry about naming variables to capture return values" then perhaps you are using the wrong language paradigm. On Mar 24, 2006, at 8:26 AM, Damir Horvat wrote: > On Fri, Mar 24, 2006 at 01:48:06PM +0100, R?mi H?rilier wrote: > >> Don't forget to get the value returned by queue:in(Item,Queue). >> >> 1> Q1 = queue:new(). >> {[],[]} >> 2> Q2 = queue:in("item1", Q1). >> {["item1"],[]} >> 3> Q3 = queue:in("item2", Q2). >> {["item2"],["item1"]} >> 4> > > Ok, I get this. But what bothers me is, how to do this in functional > way? > > I'd like to have this queue accessible in ram. I'd like to have two > functions (push, shift) which adds new element to the queue and takes > one off. I don't want to worry about naming variables to capture return > values. > > For example: > > push (element, queue) # puts element on the tail of the queue > shift (queue) # gets first element from the queue. > > That's all I need. Is this doable in erlang? 
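For the record, the push/shift interface asked for above is a thin wrapper around the stdlib queue module; both functions hand back the updated queue, which the caller threads along instead of reassigning a variable. A sketch, with the empty case handled explicitly:

push(Item, Q) ->
    queue:in(Item, Q).

shift(Q) ->
    case queue:out(Q) of
        {{value, Item}, Q2} -> {Item, Q2};
        {empty, Q1}         -> {empty, Q1}
    end.

%% Usage:
%% Q0 = queue:new(),
%% Q1 = push(item1, Q0),
%% Q2 = push(item2, Q1),
%% {item1, Q3} = shift(Q2).
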
> From yani.dzhurov@REDACTED Fri Mar 24 15:56:39 2006 From: yani.dzhurov@REDACTED (Yani Dzhurov) Date: Fri, 24 Mar 2006 16:56:39 +0200 Subject: exclude function calls Message-ID: <00831FD3A5FC9548A855C25B42DBC7BD1C64@server2.dobrosoft.local> Hi guys, Is it possible to exclude some particular function calls in function definition while compilation. Here it is what I mean: fun()-> foo(), bar(), baz(). For example to exclude calling bar() /during compilation/, but without removing the code:.I want to be called sometimes, and sometimes not. I've tried to do it with macros fun()-> foo(), -ifdef(some_macro). bar(), -endif. baz(). This way it would be pretty easy to modify, but it's complaining for a syntax error. Am I somewhere wrong? I know I could use: -ifdef(some_macro). fun()-> foo(), baz(). -else. fun()-> foo(), bar(), baz(). -endif. But this way it would double all the source code, since I need to use it in almost all function, and would make it pretty unreadable. I would appreciate any help and ideas how to implement it. Thanks, Yani -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/x-pkcs7-signature Size: 3326 bytes Desc: not available URL: From yani.dzhurov@REDACTED Fri Mar 24 16:07:56 2006 From: yani.dzhurov@REDACTED (Yani Dzhurov) Date: Fri, 24 Mar 2006 17:07:56 +0200 Subject: exclude function calls Message-ID: <016401c64f54$bb25c660$1500a8c0@name3d6d1f4b1d> Hi guys, Is it possible to exclude some particular function calls in function definition while compilation. Here it is what I mean: fun()-> foo(), bar(), baz(). For example to exclude calling bar() /during compilation/, but without removing the code..I want to be called sometimes, and sometimes not. I've tried to do it with macros fun()-> foo(), -ifdef(some_macro). bar(), -endif. baz(). This way it would be pretty easy to modify, but it's complaining for a syntax error. Am I somewhere wrong? I know I could use: -ifdef(some_macro). fun()-> foo(), baz(). -else. fun()-> foo(), bar(), baz(). -endif. But this way it would double all the source code, since I need to use it in almost all function, and would make it pretty unreadable. I would appreciate any help and ideas how to implement it. Thanks, Yani -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/x-pkcs7-signature Size: 3326 bytes Desc: not available URL: From ft@REDACTED Fri Mar 24 16:19:23 2006 From: ft@REDACTED (Fredrik Thulin) Date: Fri, 24 Mar 2006 16:19:23 +0100 Subject: exclude function calls In-Reply-To: <00831FD3A5FC9548A855C25B42DBC7BD1C64@server2.dobrosoft.local> References: <00831FD3A5FC9548A855C25B42DBC7BD1C64@server2.dobrosoft.local> Message-ID: <200603241619.23648.ft@it.su.se> On Friday 24 March 2006 15:56, Yani Dzhurov wrote: > Hi guys, > > > > Is it possible to exclude some particular function calls in function > definition while compilation. I think you should avoid compile-time determinations like these as much as possible. Use run-time checks to see if you should do A or B instead. With compile-time decisions you never really know what the compiled code will do if you for example enable or disable something in a configuration file. Was the code compiled to include functionality X, which could therefor be enabled, or not? 
Untested, but here is a way to do it at compile time : -define(DOBAR, true). myfun() -> foo(), ?DOBAR andalso bar(), baz(). or, with a function (want_to_do_bar()) that determines if bar() should be called or not : calling_function() -> DoBar = want_to_do_bar(), myfun(DoBar). myfun(DoBar) -> foo(), DoBaz andalso bar(), baz(). You get the idea. /Fredrik From kostis@REDACTED Fri Mar 24 17:41:55 2006 From: kostis@REDACTED (Kostis Sagonas) Date: Fri, 24 Mar 2006 17:41:55 +0100 (MET) Subject: exclude function calls In-Reply-To: Mail from '"Yani Dzhurov" ' dated: Fri, 24 Mar 2006 17:07:56 +0200 Message-ID: <200603241641.k2OGftZn004359@spikklubban.it.uu.se> Some things are definitely a matter of taste, but I would use something along the following lines: ------------------------------------------------------------- fun() -> foo(), bar(), baz(). -ifdef(some_macro). bar() -> .... -else. bar() -> ok. -endif ------------------------------------------------------------- and then I would make sure I compile with +inline. If bar/0 is a remote call, then I would do something along the lines of ------------------------------------------------------------- -ifdef(some_macro). -define(BAR, mod:bar()). -else. -define(BAR, ok). -endif. fun()-> foo(), ?BAR, baz(). ------------------------------------------------------------- Hope this helps, Kostis From ryanobjc@REDACTED Fri Mar 24 20:06:41 2006 From: ryanobjc@REDACTED (Ryan Rawson) Date: Fri, 24 Mar 2006 11:06:41 -0800 Subject: in-ram lists In-Reply-To: References: <20060324122447.GA12165@semp.x-si.org> <4423EA86.20403@yahoo.fr> <20060324132656.GA12516@semp.x-si.org> Message-ID: <78568af10603241106x32645a70ka46aa8f59e8a6d1c@mail.gmail.com> Recursion solves all your problems, along with processes. Eg: queue_loop( Q ) -> receive { push, Datum } -> queue_loop( queue:in( Q, Datum ) ) ; { pop, From } -> {Datum, NewQ } = queue:out( Q ) , From ! Datum , queue_loop( NewQ ) end. You get around having to name successive queues different names, yet you get the behaviour you want. Since erlang processes are light weight, you can have a process for every queue. The queue ID is now just a process id - this is how the file:open() and io:format() functions work. good luck, -ryan On 3/24/06, orbitz@REDACTED wrote: > You can always write a process to handle this stuff. If you "don't > want to worry about naming variables to capture return values" then > perhaps you are using the wrong language paradigm. > > On Mar 24, 2006, at 8:26 AM, Damir Horvat wrote: > > > On Fri, Mar 24, 2006 at 01:48:06PM +0100, R?mi H?rilier wrote: > > > >> Don't forget to get the value returned by queue:in(Item,Queue). > >> > >> 1> Q1 = queue:new(). > >> {[],[]} > >> 2> Q2 = queue:in("item1", Q1). > >> {["item1"],[]} > >> 3> Q3 = queue:in("item2", Q2). > >> {["item2"],["item1"]} > >> 4> > > > > Ok, I get this. But what bothers me is, how to do this in functional > > way? > > > > I'd like to have this queue accessible in ram. I'd like to have two > > functions (push, shift) which adds new element to the queue and takes > > one off. I don't want to worry about naming variables to capture return > > values. > > > > For example: > > > > push (element, queue) # puts element on the tail of the queue > > shift (queue) # gets first element from the queue. > > > > That's all I need. Is this doable in erlang? 
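A small caveat on the loop sketched above: queue:in/2 takes the item as its first argument (queue:in(Datum, Q)), and queue:out/1 returns {{value, Item}, Q2} or {empty, Q1} rather than {Datum, NewQ}. A corrected version of the same idea, keeping the message names from the post above and tagging the reply so the caller can tell an empty queue from a stored value:

queue_loop(Q) ->
    receive
        {push, Datum} ->
            queue_loop(queue:in(Datum, Q));
        {pop, From} ->
            case queue:out(Q) of
                {{value, Datum}, NewQ} ->
                    From ! {value, Datum},
                    queue_loop(NewQ);
                {empty, NewQ} ->
                    From ! empty,
                    queue_loop(NewQ)
            end
    end.
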
> > > > From ken@REDACTED Fri Mar 24 21:14:09 2006 From: ken@REDACTED (Kenneth Johansson) Date: Fri, 24 Mar 2006 21:14:09 +0100 Subject: The Computer Language Shootout In-Reply-To: References: <1143149178.3639.5.camel@tiger> Message-ID: <1143231249.3790.5.camel@tiger> On Fri, 2006-03-24 at 07:04 +0100, Ulf Wiger wrote: > Den 2006-03-23 22:26:18 skrev Kenneth Johansson : > > > My version by the way was dead last :( a wooping 99 times slower than > > the fastest entry. > > > > > > http://shootout.alioth.debian.org/gp4/benchmark.php?test=knucleotide?=all > > It's a lot better now. (: > > Roughly a 10x improvement. Well closer to 20x but now both versions are included and there are almost no difference ??? must have been some strange thing happening before with the measurement or something. Actually the version using dict are faster but only by 100ms. From serge@REDACTED Fri Mar 24 23:06:59 2006 From: serge@REDACTED (Serge Aleynikov) Date: Fri, 24 Mar 2006 17:06:59 -0500 Subject: os_mon & alarm_handler in R10B-10 Message-ID: <44246D83.2000402@hq.idt.net> Hi, I've been experimenting with the reworked os_mon in R10B-10, and encountered the following issue. The documentation encourages to replace the default alarm handler with something more sophisticated. For that reason I created a custom handler - lama_alarm_h (LAMA app in jungerl), which uses gen_event:swap_sup_handler/3. I initiate that handler prior to starting OS_MON, and then start OS_MON. In the latest release R10B-10, OS_MON calls alarm_handler:get_alarms/0 upon startup. This causes the 'alarm_handler' event manager issue a call in the alarm_handler.erl module. However, since that handler was replaced by a custom alarm handler, the gen_event's call fails with {error, bad_module}. gen_event always dispatches a call/3 to a specific handler module passed as a parameter, e.g.: -----[alarm_handler.erl (line: 60)]----- get_alarms() -> gen_event:call(alarm_handler, alarm_handler, get_alarms). ---------------------------------------- Yet, if the alarm_handler handler was swapped by another module, the gen_event:call will report an error, therefore crashing OS_MON. One way to resolve this problem would be to introduce another exported function in gen_event: gen_event:call(EventMgrRef, Request) -> Result Can the OTP team suggest some other workaround? Serge -- Serge Aleynikov R&D Telecom, IDT Corp. Tel: (973) 438-3436 Fax: (973) 438-1464 serge@REDACTED From serge@REDACTED Sat Mar 25 02:23:26 2006 From: serge@REDACTED (Serge Aleynikov) Date: Fri, 24 Mar 2006 20:23:26 -0500 Subject: os_mon & alarm_handler in R10B-10 In-Reply-To: <44246D83.2000402@hq.idt.net> References: <44246D83.2000402@hq.idt.net> Message-ID: <44249B8E.2030105@hq.idt.net> For now I used the following patch to take care of this issue, but I would be curious to hear the opinion of the OTP staff. Regards, Serge --- alarm_handler.erl.orig Fri Mar 24 20:08:18 2006 +++ alarm_handler.erl Fri Mar 24 20:19:15 2006 @@ -58,7 +58,12 @@ %% Returns: [{AlarmId, AlarmDesc}] %%----------------------------------------------------------------- get_alarms() -> - gen_event:call(alarm_handler, alarm_handler, get_alarms). + case gen_event:which_handlers(alarm_handler) of + [M | _] -> + gen_event:call(alarm_handler, M, get_alarms); + [] -> + [] + end. add_alarm_handler(Module) when atom(Module) -> gen_event:add_handler(alarm_handler, Module, []). Serge Aleynikov wrote: > Hi, > > I've been experimenting with the reworked os_mon in R10B-10, and > encountered the following issue. 
> > The documentation encourages to replace the default alarm handler with > something more sophisticated. For that reason I created a custom > handler - lama_alarm_h (LAMA app in jungerl), which uses > gen_event:swap_sup_handler/3. > > I initiate that handler prior to starting OS_MON, and then start OS_MON. > > In the latest release R10B-10, OS_MON calls alarm_handler:get_alarms/0 > upon startup. > > This causes the 'alarm_handler' event manager issue a call in the > alarm_handler.erl module. However, since that handler was replaced by a > custom alarm handler, the gen_event's call fails with > {error, bad_module}. > > gen_event always dispatches a call/3 to a specific handler module passed > as a parameter, e.g.: > > -----[alarm_handler.erl (line: 60)]----- > get_alarms() -> > gen_event:call(alarm_handler, alarm_handler, get_alarms). > ---------------------------------------- > > Yet, if the alarm_handler handler was swapped by another module, the > gen_event:call will report an error, therefore crashing OS_MON. > > One way to resolve this problem would be to introduce another exported > function in gen_event: > > gen_event:call(EventMgrRef, Request) -> Result > > Can the OTP team suggest some other workaround? > > Serge > -- Serge Aleynikov R&D Telecom, IDT Corp. Tel: (973) 438-3436 Fax: (973) 438-1464 serge@REDACTED From xlcr@REDACTED Sat Mar 25 08:55:47 2006 From: xlcr@REDACTED (Nick Linker) Date: Sat, 25 Mar 2006 13:55:47 +0600 Subject: Newbie questions In-Reply-To: <44246D83.2000402@hq.idt.net> References: <44246D83.2000402@hq.idt.net> Message-ID: <4424F783.6050606@mail.ru> Good day. I was playing with Erlang for calculation over large numbers and discovered some issues: 1. math:log fails with "badarg" for big integers. 2. It is impossible to know how many digits in an integer _efficiently_ (i.e. with O(1) or O(log(n)), where n - number of digits). length(number_to_list(N)) appears to have O(n). 3. Let me define the following function: fib(0) -> 0; fib(1) -> 1; fib(N) -> fib(N-1) + fib(N-2). I have failed to convert this function _without_ put/get to be able to compute even fib(100) within reasonable period of time (I guess I did it wrong so that tail recursion was not here). Is there a way to compute fib(N), N>=100 without side effects? Thank you in advance, Best regards, Linker Nick. From kostis@REDACTED Sat Mar 25 09:47:11 2006 From: kostis@REDACTED (Kostis Sagonas) Date: Sat, 25 Mar 2006 09:47:11 +0100 (MET) Subject: Newbie questions In-Reply-To: Mail from 'Nick Linker ' dated: Sat, 25 Mar 2006 13:55:47 +0600 Message-ID: <200603250847.k2P8lBfH007162@spikklubban.it.uu.se> Nick Linter wrote: > I was playing with Erlang for calculation over large numbers and > discovered some issues: > 1. math:log fails with "badarg" for big integers. An example would have helped us understand better what the issue is. Currently, I get: Eshell V5.5 (abort with ^G) 1> math:log(43466557686938914862637500386755014010958388901725051132915256476112292920052539720295234060457458057800732025086130975998716977051839168242483814062805283311821051327273518050882075662659534523370463746326528). 480.407 > 3. Let me define the following function: > fib(0) -> 0; > fib(1) -> 1; > fib(N) -> fib(N-1) + fib(N-2). > I have failed to convert this function _without_ put/get to be able to > compute even fib(100) within reasonable period of time (I guess I did it > wrong so that tail recursion was not here). Is there a way to compute > fib(N), N>=100 without side effects? Yes. Try: -module(fib). -export([fib/1]). 
-import(math, [pow/3, sqrt/1]). fib(N) -> trunc((1/sqrt(5)) * (pow(((1+sqrt(5))/2),N) - pow(((1-sqrt(5))/2),N))). Kostis PS. The performance you are experiencing in your version of fib/1 has nothing to do with tail recursion... From paris@REDACTED Sat Mar 25 11:27:57 2006 From: paris@REDACTED (Javier =?iso-8859-1?q?Par=EDs_Fern=E1ndez?=) Date: Sat, 25 Mar 2006 11:27:57 +0100 Subject: Newbie questions In-Reply-To: <4424F783.6050606@mail.ru> References: <44246D83.2000402@hq.idt.net> <4424F783.6050606@mail.ru> Message-ID: <200603251127.58459.paris@dc.fi.udc.es> El S?bado, 25 de Marzo de 2006 08:55, Nick Linker escribi?: Hi, > Good day. > 3. Let me define the following function: > fib(0) -> 0; > fib(1) -> 1; > fib(N) -> fib(N-1) + fib(N-2). > I have failed to convert this function _without_ put/get to be able to > compute even fib(100) within reasonable period of time (I guess I did it > wrong so that tail recursion was not here). Is there a way to compute > fib(N), N>=100 without side effects? fibaux(_N2,N1,N,N) -> N1; fibaux(N2,N1,C,N) -> fibaux(N1,N1+N2,C+1,N). fib(0) -> 0; fib(N) -> fibaux(0,1,0,N). However, as Kostis said, this has more to do with having two recursive calls each time than with it being tail recursion or not. If you try to see how it evaluated, the number of recursive calls expands exponentially. Regards. From invisio22@REDACTED Sat Mar 25 11:35:43 2006 From: invisio22@REDACTED (Eric Shun) Date: Sat, 25 Mar 2006 11:35:43 +0100 Subject: Mnesia request Message-ID: <3f9db9f20603250235q39f4d1d3u@mail.gmail.com> Hello, I'm just starting to learn how to use Mnesia and I can't find how to select every rows in a table, which contain at least one field undefined... How can I do that? -------------- next part -------------- An HTML attachment was scrubbed... URL: From michal@REDACTED Sat Mar 25 13:30:21 2006 From: michal@REDACTED (Michal Slaski) Date: Sat, 25 Mar 2006 12:30:21 +0000 Subject: Mnesia request In-Reply-To: <3f9db9f20603250235q39f4d1d3u@mail.gmail.com> References: <3f9db9f20603250235q39f4d1d3u@mail.gmail.com> Message-ID: <84d062da0603250430y2853d821k@mail.gmail.com> Eric Shun wrote: > I'm just starting to learn how to use Mnesia and I can't find how to select > every rows in a table, which contain at least one field undefined... > How can I do that? Keys = mnesia:dirty_all_keys(tablename), [ Rec || Key <- Keys, Rec <- mnesia:dirty_read(tablename, Key), lists:member(undefined, tuple_to_list(Rec))]. Will return a list of records ("rows") with at least one field == undefined. -- Michal Slaski www.erlang-consulting.com From rprice@REDACTED Sat Mar 25 14:32:24 2006 From: rprice@REDACTED (Roger Price) Date: Sat, 25 Mar 2006 14:32:24 +0100 (CET) Subject: Can pattern variables be globally bound? Message-ID: The following program: -module(test) . % 1 -export([test/1]) . % 2 test (X) -> % 3 case X*10 % 4 of 0 -> a % 5 ; X -> b % 6 end . % 7 compiles with no warnings, and provides the following output: Eshell V5.4.9 (abort with ^G) 1> test:test(0) . a 2> test:test(1) . =ERROR REPORT==== 25-Mar-2006::14:20:45 === Error in process <0.31.0> with exit value: {{case_clause,10},[{test,test,1},{shell,exprs,6},{shell,eval_loop,3}]} My understanding of the error message is that the pattern variable X on line 6 is already bound to the value 1, and therefore no match is possible for value 10. Is this correct? Roger From ryanobjc@REDACTED Sat Mar 25 15:15:18 2006 From: ryanobjc@REDACTED (Ryan Rawson) Date: Sat, 25 Mar 2006 06:15:18 -0800 Subject: Can pattern variables be globally bound? 
In-Reply-To: References: Message-ID: <78568af10603250615t1d925999u9a7a3fbaf5c54214@mail.gmail.com>

You are correct. When X != 0, you are essentially saying "b IFF X*10 == X", which is generally not true :-) So you get a case clause exception.

The subject line is a little misleading, since variables can only exist in the context of a single function "scope" - with lexical scoping rules of course for fun()s. "Global" variables can be accomplished with the process dictionary or ets or mnesia tables. There are other techniques, like using a gen_server to maintain state across requests (using recursion/tail recursion).

-ryan

On 3/25/06, Roger Price wrote: > The following program: > > -module(test) . % 1 > -export([test/1]) . % 2 > test (X) -> % 3 > case X*10 % 4 > of 0 -> a % 5 > ; X -> b % 6 > end . % 7 > > compiles with no warnings, and provides the following output: > > Eshell V5.4.9 (abort with ^G) > 1> test:test(0) . > a > 2> test:test(1) . > > =ERROR REPORT==== 25-Mar-2006::14:20:45 === > Error in process <0.31.0> with exit value: > {{case_clause,10},[{test,test,1},{shell,exprs,6},{shell,eval_loop,3}]} > > My understanding of the error message is that the pattern variable X on > line 6 is already bound to the value 1, and therefore no match is possible > for value 10. Is this correct? > > Roger >

From sgsfak@REDACTED Sat Mar 25 16:02:09 2006 From: sgsfak@REDACTED (Stelianos G. Sfakianakis) Date: Sat, 25 Mar 2006 17:02:09 +0200 Subject: Scalability in windows Message-ID:

Hi all,

Erlang is well known for its ability to scale, for example as shown here: http://www.sics.se/~joe/apachevsyaws.html

I believe that much of this scalability comes from the underlying operating system and its event APIs (e.g. kqueue, poll etc). My question is what happens on Windows platforms. Does Erlang manage to exhibit the same performance? Does it use some advanced Windows API or the "good" old select? And finally, do you have any real (production) Erlang application on Windows and real-world experience about its performance?

Thank you very much for any responses!

Stelios

From orbitz@REDACTED Sat Mar 25 20:28:28 2006 From: orbitz@REDACTED (orbitz@REDACTED) Date: Sat, 25 Mar 2006 14:28:28 -0500 Subject: Scalability in windows In-Reply-To: References: Message-ID:

Well, I don't have any evidence for you, but testing it shouldn't be too hard. Run yaws or some other application on Windows and nail it. I am under the impression Erlang will fare well. The implementation should use IOCP, which is supposed to handle large numbers of sockets fairly efficiently.

On Mar 25, 2006, at 10:02 AM, Stelianos G. Sfakianakis wrote: > Hi all, > > Erlang is well known for its ability to scale, for example as shown > here: http://www.sics.se/~joe/apachevsyaws.html > > I believe that much of this scalability comes from the underlying > operating system and its event APIs (e.g. kqueue, poll etc). My > question is what happens on Windows platforms. Does Erlang manage to > exhibit the same performance? Does it use some advanced Windows API or > the "good" old select? And finally, do you have any real (production) > Erlang application on Windows and real-world experience about its > performance? > > Thank you very much for any responses!
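One crude way to get a first impression on any platform is to open N concurrent connections against a locally running yaws (or any HTTP server) and time them. This is a rough probe rather than a proper benchmark; the module name, request line and timeouts below are arbitrary:

-module(winprobe).
-export([probe/3]).

%% Time how long it takes for N concurrent clients to each complete one
%% trivial HTTP request.  Returns the elapsed time in milliseconds.
probe(Host, Port, N) ->
    Parent = self(),
    T0 = erlang:now(),
    [spawn(fun() -> Parent ! {done, one_shot(Host, Port)} end)
     || _ <- lists:seq(1, N)],
    [receive {done, _} -> ok end || _ <- lists:seq(1, N)],
    timer:now_diff(erlang:now(), T0) / 1000.

one_shot(Host, Port) ->
    case gen_tcp:connect(Host, Port, [binary, {active, false}], 5000) of
        {ok, S} ->
            ok = gen_tcp:send(S, <<"GET / HTTP/1.0\r\n\r\n">>),
            Reply = gen_tcp:recv(S, 0, 5000),
            gen_tcp:close(S),
            Reply;
        Error ->
            Error
    end.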
> > Stelios From thomasl_erlang@REDACTED Sat Mar 25 20:57:36 2006 From: thomasl_erlang@REDACTED (Thomas Lindgren) Date: Sat, 25 Mar 2006 11:57:36 -0800 (PST) Subject: exclude function calls In-Reply-To: <00831FD3A5FC9548A855C25B42DBC7BD1C64@server2.dobrosoft.local> Message-ID: <20060325195736.1458.qmail@web36714.mail.mud.yahoo.com> --- Yani Dzhurov wrote: > Hi guys, > > > > Is it possible to exclude some particular function > calls in function > definition while compilation. How about this: Define this macro somewhere (actually, conditional compilation might be cleaner here :-): -define(maybe(Expr), Expr). %%-define(maybe(Expr), ok). Then wrap the calls that you want to exclude: f() -> foo(), ?maybe(bar()), baz(). The above ?maybe can still yield compilation errors if you export variables from it. (Consider the case when ?maybe(E) expands to 'ok'.) You could also define it this way: -define(maybe(Expr), (fun() -> Expr end)() ). This prevents locally bound variables from escaping the ?maybe, and can in principle be compiled as efficiently as just executing Expr. I'm not sure if the compiler does that optimization, however. Best, Thomas __________________________________________________ Do You Yahoo!? Tired of spam? Yahoo! Mail has the best spam protection around http://mail.yahoo.com From ke.han@REDACTED Sun Mar 26 12:47:52 2006 From: ke.han@REDACTED (ke han) Date: Sun, 26 Mar 2006 18:47:52 +0800 Subject: The Computer Language Shootout In-Reply-To: <1142565775.11970.10.camel@tiger> References: <1142565775.11970.10.camel@tiger> Message-ID: Thanks for helping clean up the code with the erlang community. Its clear you need to have solid experience with any language to get a decent benchmark result. Overall, I'm pleased with the current result on knucleotide. If erlang can't do much better than this without making the code unreadable then thats a fair score. I would like to solicit a little further input from the erlang community that can explain why erlang still performs slower than a few others. The most interesting comparisons I would be interested in are to OCaml and Java. OCaml since it gets grouped with erlang as functional. Java since its a market leader and I have enough experience with Java to better understand the rational. Any takers? thanks, ke han On Mar 17, 2006, at 11:22 AM, Kenneth Johansson wrote: > http://shootout.alioth.debian.org/gp4/benchmark.php? > test=knucleotide&lang=all > > I did an implementation in erlang for the knucleotide. > > And while the code is much shorter than C and fortran it's larger than > ruby and python. But since this is my first real try at erlang I'm > sure > someone here can do significant improvement. > > Also the speed is a problem it's on my computer 8 times slower than > the > python version. > > From jeremie@REDACTED Sun Mar 26 16:01:17 2006 From: jeremie@REDACTED (=?ISO-8859-1?Q?J=E9r=E9mie_Lumbroso?=) Date: Sun, 26 Mar 2006 16:01:17 +0200 Subject: Using TCP sockets with 'receive' instead 'gen_tcp:recv/2' Message-ID: <2b7b425b0603260601n40617a8age954cbc280dcbda5@mail.gmail.com> Hello, To get to know Erlang, and how to use it, I've set myself a small introductory project. So for this first project, I wanted to write a Clue[do] server. But I've encountered a serious problem with the receive ... after ... end construct. I would like to use it to handle the client sockets which were obtained using gen_tcp:accept/1, and for that purpose, have made sure to set the option {active, true} wherever I thought it mattered. 
But it still does not work?and I have no idea why. This is the code that I use to create the listening socket: gen_tcp:listen(?port, [binary, {nodelay, true}, {packet, 0}, {reuseaddr, true}, {active, false}]) And here is how I accept the client sockets: server_accept(LSock, User_List) -> case gen_tcp:accept(LSock, 1000) of %%%% % Spawning a new thread, linking to it, and returning % a new 'user list'. {ok, Sock} -> io:format("Client connected successfully ...~n"), Pid = spawn_link(?MODULE, relay, [Sock]), Client = #client{pid = Pid, socket = Sock}, [ Client | User_List ]; _Else -> User_List end. And finally, here is the relay function in question (BTW, the documentation says that I can't use the receive to figure out when a socket created with accept is closed, but I was not sure if this was an omission or an actual limitation): relay(Socket) -> io:format("[debug]-> start~n"), %%%% Activate socket (so we can receive through it. inet:setopts(Socket, [binary, {nodelay,true}, {active, true}]), relay(Socket, []). relay(Socket, Buffer) -> receive %%%% % Split the received data in lines, process them % and put the extra data back in buffer before % starting the loop again. {tcp, Socket, Bin} -> Data = binary_to_list(Bin), io:format("[debug]-> ~p~n", [ Data ]), case regexp:split(Buffer ++ Data, "\r\n") of {ok, FieldList} -> New_Buffer = process_lines(FieldList); _NoMatch -> New_Buffer = Buffer ++ Data end, ?MODULE:relay(Socket, New_Buffer); {tcp_closed, Socket} -> io:format("[debug]-> end"), exit(self(), tcp_closed); {server, close} -> gen_tcp:close(Socket), exit(self(), tcp_closed); {server, {data, Data}} -> gen_tcp:send(Socket, Data), ?MODULE:relay(Socket, Buffer) after 5000 -> io:format("[debug]-> wait~n"), ?MODULE:relay(Socket, Buffer) end. In case this is not enough, I've attached the whole file as well. Is there something that I'm doing wrong or is this just how it (doesn't) work? Best Regards, - J?r?mie Lumbroso -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: snippet.erl Type: application/octet-stream Size: 3428 bytes Desc: not available URL: From ulf@REDACTED Sun Mar 26 17:56:27 2006 From: ulf@REDACTED (Ulf Wiger) Date: Sun, 26 Mar 2006 17:56:27 +0200 Subject: The Computer Language Shootout In-Reply-To: References: <1142565775.11970.10.camel@tiger> Message-ID: Den 2006-03-26 12:47:52 skrev ke han : > I would like to solicit a little further input from the erlang community > that can explain why erlang still performs slower than a few others. > The most interesting comparisons I would be interested in are to OCaml > and Java. OCaml since it gets grouped with erlang as functional. Java > since its a market leader and I have enough experience with Java to > better understand the rational. OCaml has very strict static type system. This helps a lot, as no runtime type checks are necessary. Java is also statically typed, and allows for destructive update of data structures. This can help in low-level benchmarks. Furthermore, Erlang beats the socks off of both in the cheap- concurrency benchmark: cpu time N 5,000 10,000 15,000 OCaml 252.56 515.45 757.96 OCaml #2 131.82 263.52 394.79 JDK -server #2 65.74 95.71 194.60 HiPE #2 1.84 3.59 5.18 memory N 5,000 10,000 15,000 JDK -server #2 24,500 27,556 24,500 OCaml 19,072 18,916 19,096 HiPE #2 5,000 4,996 4,996 OCaml #2 4,328 4,332 4,284 (And this is still for small N, as far as Erlang's concerned...) 
Java is not only slow in concurrency benchmarks. It has a poor concurrency model as well. OCaml relies on the concurrency support provided by the underlying operating system, as far as I remember. For an all-round contender within Erlang's domain, you might look at Concurrent Haskell. I think Haskell beats Erlang in expressiveness, and often in speed, but esp. the support for concurrency and distribution is somewhat less mature than in Erlang. You should find more comparative data here: http://www.cee.hw.ac.uk/~dsg/telecoms/ (at least eventually. Currently, is seems unreachable.) BR, Ulf W -- Ulf Wiger From jeremie@REDACTED Sun Mar 26 19:01:55 2006 From: jeremie@REDACTED (=?ISO-8859-1?Q?J=E9r=E9mie_Lumbroso?=) Date: Sun, 26 Mar 2006 19:01:55 +0200 Subject: Using TCP sockets with 'receive' instead 'gen_tcp:recv/2' In-Reply-To: <2b7b425b0603260601n40617a8age954cbc280dcbda5@mail.gmail.com> References: <2b7b425b0603260601n40617a8age954cbc280dcbda5@mail.gmail.com> Message-ID: <2b7b425b0603260901o269c9dbeo3405c9f84d42c6f@mail.gmail.com> On 3/26/06, J?r?mie Lumbroso wrote: > > This is the code that I use to create the listening socket: > > gen_tcp:listen(?port, [binary, > {nodelay, true}, > {packet, 0}, > {reuseaddr, true}, > {active, false}]) > > BTW, I have indeed tried setting active to true here. And the problem persists. -------------- next part -------------- An HTML attachment was scrubbed... URL: From orbitz@REDACTED Sun Mar 26 19:03:58 2006 From: orbitz@REDACTED (orbitz@REDACTED) Date: Sun, 26 Mar 2006 12:03:58 -0500 Subject: Using TCP sockets with 'receive' instead 'gen_tcp:recv/2' In-Reply-To: <2b7b425b0603260601n40617a8age954cbc280dcbda5@mail.gmail.com> References: <2b7b425b0603260601n40617a8age954cbc280dcbda5@mail.gmail.com> Message-ID: You need to set the sockets controling proces to the new one you spawn. How is the socket supposed to know who to send its messages to after all? On Mar 26, 2006, at 9:01 AM, J?r?mie Lumbroso wrote: > Hello, > > To get to know Erlang, and how to use it, I've set myself a small > introductory project. So for this first project, I wanted to write > a Clue[do] server. But I've encountered a serious problem with the > receive ... after ... end construct. I would like to use it to > handle the client sockets which were obtained using gen_tcp:accept/ > 1, and for that purpose, have made sure to set the option {active, > true} wherever I thought it mattered. But it still does not work? > and I have no idea why. > > This is the code that I use to create the listening socket: > > gen_tcp:listen(?port, [binary, > {nodelay, true}, > {packet, 0}, > {reuseaddr, true}, > {active, false}]) > > > And here is how I accept the client sockets: > > server_accept(LSock, User_List) -> > case gen_tcp:accept(LSock, 1000) of > > %%%% > % Spawning a new thread, linking to it, and returning > % a new 'user list'. > {ok, Sock} -> > io:format("Client connected successfully ...~n"), > Pid = spawn_link(?MODULE, relay, [Sock]), > Client = #client{pid = Pid, socket = Sock}, > [ Client | User_List ]; > > _Else -> > User_List > end. > > And finally, here is the relay function in question (BTW, the > documentation says that I can't use the receive to figure out when > a socket created with accept is closed, but I was not sure if this > was an omission or an actual limitation): > > relay(Socket) -> > io:format("[debug]-> start~n"), > %%%% Activate socket (so we can receive through it. > inet:setopts(Socket, [binary, {nodelay,true}, > {active, true}]), > relay(Socket, []). 
> > relay(Socket, Buffer) -> > receive > > %%%% > % Split the received data in lines, process them > % and put the extra data back in buffer before > % starting the loop again. > {tcp, Socket, Bin} -> > Data = binary_to_list(Bin), > io:format("[debug]-> ~p~n", [ Data ]), > case regexp:split(Buffer ++ Data, "\r\n") of > {ok, FieldList} -> > New_Buffer = process_lines(FieldList); > > _NoMatch -> > New_Buffer = Buffer ++ Data > end, > ?MODULE:relay(Socket, New_Buffer); > > {tcp_closed, Socket} -> > io:format("[debug]-> end"), > exit(self(), tcp_closed); > > {server, close} -> > gen_tcp:close(Socket), > exit(self(), tcp_closed); > > {server, {data, Data}} -> > gen_tcp:send(Socket, Data), > ?MODULE:relay(Socket, Buffer) > > after > 5000 -> > io:format("[debug]-> wait~n"), > ?MODULE:relay(Socket, Buffer) > end. > > In case this is not enough, I've attached the whole file as well. > > Is there something that I'm doing wrong or is this just how it > (doesn't) work? > > Best Regards, > > - J?r?mie Lumbroso > > > > > > > From jeremie@REDACTED Sun Mar 26 19:10:19 2006 From: jeremie@REDACTED (=?ISO-8859-1?Q?J=E9r=E9mie_Lumbroso?=) Date: Sun, 26 Mar 2006 19:10:19 +0200 Subject: Using TCP sockets with 'receive' instead 'gen_tcp:recv/2' In-Reply-To: <2b7b425b0603260908g14edcf37o4c00bb71b4b6db6b@mail.gmail.com> References: <2b7b425b0603260601n40617a8age954cbc280dcbda5@mail.gmail.com> <2b7b425b0603260908g14edcf37o4c00bb71b4b6db6b@mail.gmail.com> Message-ID: <2b7b425b0603260910j431e4a73m7fcde9a18e1f6c81@mail.gmail.com> On 3/26/06, orbitz@REDACTED wrote: > You need to set the sockets controling proces to the new one you > spawn. How is the socket supposed to know who to send its messages > to after all? > Thank you. I've learned to be amazed by the magics Erlang is capable of, but I guess mind-reading is not one of them. :-) Thanks a lot!! It works as intended now! Best Regards, J?r?mie Lumbroso -------------- next part -------------- An HTML attachment was scrubbed... URL: From jeremie@REDACTED Sun Mar 26 19:25:42 2006 From: jeremie@REDACTED (=?ISO-8859-1?Q?J=E9r=E9mie_Lumbroso?=) Date: Sun, 26 Mar 2006 19:25:42 +0200 Subject: 'receive ... after ... end' instead of 'gen_tcp:accept/2' ? Message-ID: <2b7b425b0603260925y6cbffb7dvbbfab51e739b2aba@mail.gmail.com> Hello again, Is there perhaps a way to use the messaging system instead of the acceptcommand, to accept incoming connections on a socket? Regards, J?r?mie Lumbroso -------------- next part -------------- An HTML attachment was scrubbed... URL: From kaslist@REDACTED Sun Mar 26 19:53:46 2006 From: kaslist@REDACTED (kaslist) Date: Sun, 26 Mar 2006 19:53:46 +0200 Subject: Erlang/Windows stabilty - a newbie question. Message-ID: <4426D52A.2050009@gmail.com> Hi, This is my first post to the the list - I've been coding for about twenty-five years... but currently I'm an Erlang newbie :) At the moment, I'm having a bit of a problem with program stability using Erlang and I wondered if anyone here could help. I'm using R10B-10 on WinXP (2GHz/1GbRAM) and I've narrowed the issue to the following code which I've wrapped in a simple module to demonstrate: -module(testtokens). -export([start/0]). start() -> erlang:display(time()), FILENAME = "test.txt", case file:read_file(FILENAME) of {ok, Data} -> TESTSTRING = binary_to_list(Data); {error, Reason} -> TESTSTRING = "" end, string:tokens(TESTSTRING, " "), time(). The code is intended to load a file, convert it to a string and then split the loaded string. To test Erlang under pressure - I made 'test.txt' - 14Mb large. 
When I run it repeatedly... this is what happens... Erlang (BEAM) emulator version 5.4.13 [threads:0] Eshell V5.4.13 (abort with ^G) 1> c(testtokens). ./testtokens.erl:11: Warning: variable 'Reason' is unused {ok,testtokens} 2> testtokens:start(). {19,13,40} {19,13,50} 3> testtokens:start(). {19,13,55} {19,14,3} 4> testtokens:start(). {19,14,7} {19,14,19} 5> testtokens:start(). {19,14,22} {19,14,50} 6> testtokens:start(). {19,14,53} Abnormal termination Every time it crashes. Thus my question is... am I doing something intrinsically dumb/non-erlang here.... or is it a bug? I know Erlang doesn't pretend to be an efficient sequential processor... but I didn't expect this. If you've got any advice I'd really appreciate it. Thanks, Kyle. From erlang@REDACTED Sun Mar 26 21:05:19 2006 From: erlang@REDACTED (Michael McDaniel) Date: Sun, 26 Mar 2006 11:05:19 -0800 Subject: Erlang/Windows stabilty - a newbie question. In-Reply-To: <4426D52A.2050009@gmail.com> References: <4426D52A.2050009@gmail.com> Message-ID: <20060326190519.GK11888@delora.autosys.us> using R10B-10 on my Linux box (1.6Ghz/512MB) I get Crash dump was written to: erl_crash.dump eheap_alloc: Cannot allocate 912262800 bytes of memory (of type "heap"). I will guess you are having same issue though with a less useful message. It appears to me that the heap is not getting recovered between repeated executions of the program. I thought that binary objects got recovered; someone else perhaps can explain? Anyway, I can run repeatedly from the command line with no problems using the command: erl -s testtokens start -s erlang halt ~Michael On Sun, Mar 26, 2006 at 07:53:46PM +0200, kaslist wrote: > Hi, > > This is my first post to the the list - I've been coding for about > twenty-five years... but currently I'm an Erlang newbie :) > > At the moment, I'm having a bit of a problem with program stability > using Erlang and I wondered if anyone here could help. I'm using > R10B-10 on WinXP (2GHz/1GbRAM) and I've narrowed the issue to the > following code which I've wrapped in a simple module to demonstrate: > > > -module(testtokens). > > -export([start/0]). > > start() -> > erlang:display(time()), > FILENAME = "test.txt", > case file:read_file(FILENAME) of > {ok, Data} -> > TESTSTRING = binary_to_list(Data); > {error, Reason} -> > TESTSTRING = "" > end, > > string:tokens(TESTSTRING, " "), > time(). > > > The code is intended to load a file, convert it to a string and then > split the loaded string. To test Erlang under pressure - I made > 'test.txt' - 14Mb large. When I run it repeatedly... this is what > happens... > > > Erlang (BEAM) emulator version 5.4.13 [threads:0] > > Eshell V5.4.13 (abort with ^G) > 1> c(testtokens). > ./testtokens.erl:11: Warning: variable 'Reason' is unused > {ok,testtokens} > 2> testtokens:start(). > {19,13,40} > {19,13,50} > 3> testtokens:start(). > {19,13,55} > {19,14,3} > 4> testtokens:start(). > {19,14,7} > {19,14,19} > 5> testtokens:start(). > {19,14,22} > {19,14,50} > 6> testtokens:start(). > {19,14,53} > > Abnormal termination > > > Every time it crashes. Thus my question is... am I doing something > intrinsically dumb/non-erlang here.... or is it a bug? I know Erlang > doesn't pretend to be an efficient sequential processor... but I didn't > expect this. > > If you've got any advice I'd really appreciate it. > > Thanks, > > Kyle. > > From ulf@REDACTED Sun Mar 26 21:34:46 2006 From: ulf@REDACTED (Ulf Wiger) Date: Sun, 26 Mar 2006 21:34:46 +0200 Subject: Erlang/Windows stabilty - a newbie question. 
In-Reply-To: <4426D52A.2050009@gmail.com> References: <4426D52A.2050009@gmail.com> Message-ID: Den 2006-03-26 19:53:46 skrev kaslist : > Hi, > > This is my first post to the the list - I've been coding for about > twenty-five years... but currently I'm an Erlang newbie :) At the > moment, I'm having a bit of a problem with program stability using > Erlang and I wondered if anyone here could help. I'm using R10B-10 on > WinXP (2GHz/1GbRAM) and I've narrowed the issue to the following code > which I've wrapped in a simple module to demonstrate: [...] > Every time it crashes. Thus my question is... am I doing something > intrinsically dumb/non-erlang here.... or is it a bug? I know Erlang > doesn't pretend to be an efficient sequential processor... but I didn't > expect this. I think you've stumbled upon one of Erlang's small quirks, described in chapter 5.14 of the Erlang FAQ: "Normal data in Erlang is put on the process heap, which is garbage collected. Large binaries, on the other hand, are reference counted. This has two interesting consequences. Firstly, binaries don't count towards a process' memory use. Secondly, a lot of memory can be allocated in binaries without causing a process' heap to grow much. If the heap doesn't grow, it's likely that there won't be a garbage collection, which may cause binaries to hang around longer than expected. A strategically-placed call to erlang:garbage_collect() will help." It isn't immediately obvious, as the string:tokens/2 call should generate lots of garbage, which should also take care of the binaries. Still, if you'd throw in a call to erlang:garbage_collect() every once in a while, your system will probably stay up. In a live system, you'd want to consider spawning a process to do the work and then die. Regards, Ulf W -- Ulf Wiger From kaslist@REDACTED Mon Mar 27 01:04:47 2006 From: kaslist@REDACTED (kaslist) Date: Mon, 27 Mar 2006 01:04:47 +0200 Subject: Erlang/Windows stabilty - a newbie question. In-Reply-To: References: <4426D52A.2050009@gmail.com> Message-ID: <44271E0F.8070709@gmail.com> Ulf Wiger wrote: > Still, if you'd throw in a call to > erlang:garbage_collect() every once in a while, your system will > probably stay up. > > In a live system, you'd want to consider spawning a process > to do the work and then die. > > Regards, > Ulf W > --Ulf Wiger > It works! :) Thanks Ulf for this solution and the background information about how Erlang is doing its GCing. As to spawning... I did try that when I was originally looking for a fix to the problem... however that seemed to take the average time for each tokenising function pass from approx. 10secs... to approx. 60-120+ secs... so I guessed this was a bad idea! I'll retry the spawning approach in the light of everything suggested. Anyway, thanks again... and also to Michael for his feedback too, Kyle. From james.hague@REDACTED Mon Mar 27 01:27:33 2006 From: james.hague@REDACTED (James Hague) Date: Sun, 26 Mar 2006 17:27:33 -0600 Subject: The Computer Language Shootout In-Reply-To: References: <1142565775.11970.10.camel@tiger> Message-ID: On 3/26/06, Ulf Wiger wrote: > > OCaml has very strict static type system. This helps a lot, as no runtime > type checks are necessary. Additionally, OCaml allows you to write code in an imperative style, so if you want to destructively update variables and arrays and lean heavily in the direction of C, then you can do so. I've noticed this is common in OCaml benchmarks. From ok@REDACTED Mon Mar 27 05:26:06 2006 From: ok@REDACTED (Richard A. 
O'Keefe) Date: Mon, 27 Mar 2006 15:26:06 +1200 (NZST) Subject: Newbie questions Message-ID: <200603270326.k2R3Q601420348@atlas.otago.ac.nz> Nick Linker wrote: 1. math:log fails with "badarg" for big integers. math:log/1 was designed to work on floating-point arguments; I suspect that it's the integer->float conversion that is failing here. 2. It is impossible to know how many digits in an integer _efficiently_ (i.e. with O(1) or O(log(n)), where n - number of digits). length(number_to_list(N)) appears to have O(n). Anything else you can do with a number has to be at least O(n), so *in the context of a real use* of bignum arithmetic, why does measuring the size have to be O(lg n) rather than O(n)? 3. Let me define the following function: fib(0) -> 0; fib(1) -> 1; fib(N) -> fib(N-1) + fib(N-2). This is a SPECTACULARLY INEFFICIENT way to compute fibonacci numbers in *any* programming language. I have failed to convert this function _without_ put/get to be able to compute even fib(100) within reasonable period of time (I guess I did it wrong so that tail recursion was not here). It's not a tail recursion issue. It is simply that the computation you have specified does about 2**N function calls for argument N (*regardless* of language). So when you demand that fib(100) be computed that way, you are demanding about 1,267,650,600,228,229,401,496,703,205,376 (10**30) function calls. Assuming a machine that could do 10**10 function calls per second, that's 10**20 seconds, or about three million million years. Is there a way to compute fib(N), N>=100 without side effects? Yes, and it's easy. In fact, there are two ways. The simple obvious tail recursion does O(N) integer additions. The less obvious way treats it as a matrix exponentiation problem and does O(lg N) 2x2 integer matrix multiplies. fib(0) -> 1; fib(N) when N > 0 -> fib(N, 1, 1). fib(1, X, _) -> X; fib(N, X, Y) -> fib(N-1, X+Y, X). This and the matrix exponentiation version generalise in fairly obvious ways to recurrences of any order. From xlcr@REDACTED Mon Mar 27 05:26:47 2006 From: xlcr@REDACTED (Nick Linker) Date: Mon, 27 Mar 2006 10:26:47 +0700 Subject: Newbie questions In-Reply-To: <200603251127.58459.paris@dc.fi.udc.es> References: <44246D83.2000402@hq.idt.net> <4424F783.6050606@mail.ru> <200603251127.58459.paris@dc.fi.udc.es> Message-ID: <44275B77.4050403@mail.ru> Javier Par?s Fern?ndez wrote: I made my version, but after posting the question :-) fib(0) -> 0; fib(1) -> 1; fib(N) -> fib(N, 0, 1). fib(I, Pr, Cu) when I =< 1 -> Cu; fib(I, Pr, Cu) -> fib(I-1, Cu, Pr + Cu). Thank you for your answer nonetheless. >However, as Kostis said, this has more to do with having two recursive calls >each time than with it being tail recursion or not. If you try to see how it >evaluated, the number of recursive calls expands exponentially. > >Regards. > Best regards, Linker Nick. -------------- next part -------------- An HTML attachment was scrubbed... URL: From david.nospam.hopwood@REDACTED Mon Mar 27 05:45:21 2006 From: david.nospam.hopwood@REDACTED (David Hopwood) Date: Mon, 27 Mar 2006 04:45:21 +0100 Subject: Conditional compilation (was: Erlang/OTP R10B-10 has been released) In-Reply-To: <200603240507.k2O577hg440619@atlas.otago.ac.nz> References: <200603240507.k2O577hg440619@atlas.otago.ac.nz> Message-ID: <44275FD1.4010000@blueyonder.co.uk> Richard A. O'Keefe wrote: > David Hopwood replied > to my observations about conditional compilation. 
> I always read anything of his that I see with interest and respect, > so it was a little alarming to see that we didn't appear to agree. > My experience (mainly in C, but not specific to C) is that this > is not sufficient in practice. For example, in a recent > embedded system, I have: > > #define ENABLE_CONTROL 1 > #define ENABLE_HEATER 0 > #define ENABLE_HEATER_ALARM 0 > #define ENABLE_UV 1 > #define ENABLE_ANTISTATIC 1 > > and so on, for ~89 features. (Incidentally, it is ~61 features. I should not have counted 28 macros controlling debugging and logging that only rely on dead code elimination, not conditional compilation.) > This does not mean that there are 2^89 combinations of features > that need to be tested. The final program is going to have > *all* remaining features enabled, after any that don't work or > are redundant have been stripped out. A test with some features > disabled is only intended to be a partial test. > > Fortunately, he and I are not talking about exactly the same thing. > > I was talking about feature tests IN CODE AS DELIVERED so that it is > entirely possible that no two customers will be running the same code. > The particular example program I was talking about has now been reduced > to a possible 100,000 variants, and we have determined that most of them > do NOT in fact work. (That's because just a few of the features interact > badly.) > > David Hopwood is talking about temporarily snipping stuff out > DURING DEVELOPMENT AND TESTING. > > There are two main reasons to disable a feature: > > - because it doesn't work. Sometimes this is because the associated > hardware doesn't work yet, and sometimes the code doesn't compile or has > known bugs that would interfere with testing the rest of the system. > > - because we want to replace it with a mock/stub implementation for testing > on a development system rather than on the actual hardware. In some cases the > code would not compile on the development system because a needed library > is not implemented there. > > Note that as well as stubbing out the code that implements it, disabling a > feature sometimes needs to change code that would use that feature in other > modules. > > While I like the general idea of feature pseudo-functions, I > think that to be useful as a tool for testing parts of programs > during development (which is the main thing I use conditional > compilation for), there must be support for excluding code that > doesn't compile. > > There's a big difference here between C and Erlang. > In C, the presence or absence of a declaration in one place > can affect whether or not a later function can be compiled or not. Right; I see now that dynamic typing makes a big difference here. In C, it's often necessary to set up types and function prototypes before other code can compile. (The fact that typedefs share the same namespace as other identifiers is one reason why this is trickier even than in some other statically typed languages.) In Erlang, I think the only corresponding category of declarations that can affect the validity of other code, in a "development mode" as you describe below, are record declarations. > As long as you DON'T use the preprocessor, that's not true in Erlang. > In delivery mode, you want missing functions to be errors. > But in development mode, it may be enough if the compiler *warns* about > functions that are used or exported but not defined and automatically > plants stubs that raise an exception if called. 
Well, it is true in Erlang as it stands now, because calls to missing functions are errors. But yes, if there were a mode in which they are warnings, this approach would probably be sufficient to replace my use of conditional compilation, and is consistent with the pragmatics of a dynamically typed language. In addition to having the stub throw an exception, it might be a good idea to have a construct that tests for the presence of a function, returning a boolean. Note that this *cannot* be simulated in all cases by trying to call the function and catching the exception, since we may want to test for its presence *without* calling it. I just had a look at several more C/C++ programs (unfortunately, I don't have enough programs in Erlang, or other languages with conditional compilation, to draw any conclusions from them on this point). Excluding things that are totally C/C++-specific, all uses of conditional compilation fell into the following three categories: 1. Alternative implementations depending on platform or configuration. This includes replacing functions that are standard, but missing on a given platform. 2. Sanity checks on constant expressions, including invalid combinations of features ('#if !check #error ... #endif'). 3. Workarounds for differences between language or compiler versions (e.g. '#if __STDC_VERSION__ >= 199901L', '#if _MSC_VER > 1000', '#if __cplusplus'). I'm quite prepared to believe that category 3 is much less common in Erlang. Categories 1 and 2 would be better handled in a configuration language. (Category 2 checks could also be done at runtime on start-up, which has the advantage of allowing some conditions that are not compile-time constants to be tested.) > Taking the list of reasons in turn: > > * because it doesn't work > > That's a reason for not *calling* code, but not for not *compiling* it. > > * because the hardware doesn't work yet > > Agreed that you don't want to call code that uses hardware that > doesn't work yet, but it's not clear to me why you wouldn't want > to *compile* it. In fact, it might be *essential* to compile it > to make sure that it doesn't interfere with something else. > > * because the code doesn't compile > > We are not talking about the desirability of #if in C, but in Erlang. > It's much harder to write code that doesn't compile. If it > doesn't compile, I don't want it processed until it *does*, and that > means that I *don't* want it controlled by an "ENABLE" switch that > might accidentally be set. I suppose so, but note that Erlang doesn't (AFAIK) have nestable multiline comments, so if you do have code that you want to disable temporarily because it doesn't compile, the only way to do that without the preprocessor is to add '%' to the start of every line. Perhaps there should be an explicit construct for this, equivalent to C's '#if 0' idiom. > * because the code has known bugs > > This is a reason not to *call* the code, not a reason not to *compile* > it. You want to select test cases using some kind of rule-based thing > that leaves out things that test bugs you know about until you are > ready to test that those bugs have gone. > > * because we want to replace it with a mock/stub implementation > > This is the classic multiple implementations of a single interface > issue. This is where you want child modules and a configuration > management system that says "in this configuration use this child, > in that configuration use that one." 
> > In effect, the preprocessor does this kind of stuff *BELOW* the > language level using a very clunky and limited language. I'm saying > that it should be done *ABOVE* the language level using some > reasonable rule-based language. (Datalog?) > > * because the code would not compile on one system because > a needed library isn't there. > > So we see that > > - test case selection needs to be informed by which tests exercise > what, so that we don't (deliberately) *run* code that we don't > intend to, but that doesn't have to mean we don't *compile* it > > - installing the software in different environments may require different > implementations for some functions &c, which is very little different > from the features for installation problem. And this can be done > by child modules (which we don't have, but could do with) and some > kind of declarative configuration language. You've convinced me. (I didn't need very much convincing -- I have thought for a long time that declarative configuration languages are the Right Thing for building any nontrivial program.) I had some concern that, in cases where the variation between different versions of a module is small, that requiring it to be abstracted into a call to a child module rather than "inlined" might make the code less clear, and possibly result in some code duplication (with the maintenance problems usually associated with duplication). This would be most disruptive in cases where the conditional code uses variables from its lexically enclosing scope, which would have to be turned into function parameters. But I see from the "exclude function calls" thread that the current Erlang preprocessor can't actually be used to express conditional code within a function, anyway. And after looking through the C/C++ code mentioned above, I think the needed restructuring would be an improvement in almost all cases, and not at all onerous when writing new programs. I have some concrete suggestions for the build/configuration system, but I'll leave those to another post. -- David Hopwood From david.nospam.hopwood@REDACTED Mon Mar 27 06:06:10 2006 From: david.nospam.hopwood@REDACTED (David Hopwood) Date: Mon, 27 Mar 2006 05:06:10 +0100 Subject: Commenting out code (was: Conditional compilation) In-Reply-To: <44275FD1.4010000@blueyonder.co.uk> References: <200603240507.k2O577hg440619@atlas.otago.ac.nz> <44275FD1.4010000@blueyonder.co.uk> Message-ID: <442764B2.1020404@blueyonder.co.uk> David Hopwood wrote: > Richard A. O'Keefe wrote: > [snip] >> * because the code doesn't compile >> >> We are not talking about the desirability of #if in C, but in Erlang. >> It's much harder to write code that doesn't compile. If it >> doesn't compile, I don't want it processed until it *does*, and that >> means that I *don't* want it controlled by an "ENABLE" switch that >> might accidentally be set. > > I suppose so, but note that Erlang doesn't (AFAIK) have nestable multiline > comments, so if you do have code that you want to disable temporarily because > it doesn't compile, the only way to do that without the preprocessor is to add > '%' to the start of every line. Perhaps there should be an explicit construct for > this, equivalent to C's '#if 0' idiom. What am I talking about? This isn't a bug in Erlang, it's a feature. In C, it can be easy to miss an '#if 0', especially if it is above the page currently being displayed. 
A '%' at the start of each line is always visible, it does nest properly, and can be just as convenient given an 'comment/uncomment region' editor command, such as that documented in . -- David Hopwood From david.nospam.hopwood@REDACTED Mon Mar 27 06:19:10 2006 From: david.nospam.hopwood@REDACTED (David Hopwood) Date: Mon, 27 Mar 2006 05:19:10 +0100 Subject: Fibonacci function (was: Newbie questions) In-Reply-To: <200603270326.k2R3Q601420348@atlas.otago.ac.nz> References: <200603270326.k2R3Q601420348@atlas.otago.ac.nz> Message-ID: <442767BE.5070001@blueyonder.co.uk> Richard A. O'Keefe wrote: > Nick Linker wrote: > 3. Let me define the following function: > fib(0) -> 0; > fib(1) -> 1; > fib(N) -> fib(N-1) + fib(N-2). > > This is a SPECTACULARLY INEFFICIENT way to compute fibonacci numbers > in *any* programming language. Yes. See the logarithmic time algorithm at . > It's not a tail recursion issue. It is simply that the computation you > have specified does about 2**N function calls for argument N (*regardless* > of language). To be terribly pedantic, it is *possible* for a language to do automatic memoization of side-effect-free functions. OTOH, without some annotations to say which functions are worth memoizing (and to stop the memo table from growing without bound), that can very often be a pessimization. -- David Hopwood From xlcr@REDACTED Mon Mar 27 06:45:06 2006 From: xlcr@REDACTED (Nick Linker) Date: Mon, 27 Mar 2006 11:45:06 +0700 Subject: Newbie questions In-Reply-To: <200603250847.k2P8lBfH007162@spikklubban.it.uu.se> References: <200603250847.k2P8lBfH007162@spikklubban.it.uu.se> Message-ID: <44276DD2.2070503@mail.ru> Kostis Sagonas wrote: >Nick Linter wrote: > >An example would have helped us understand better what the issue is. >Currently, I get: > >Eshell V5.5 (abort with ^G) >1> math:log(...). >480.407 > Well, I get the following result: 43> math:log10(test:fib(1476)). 308.116 44> math:log10(test:fib(1477)). =ERROR REPORT==== 27-Mar-2006::10:57:47 === Error in process <0.181.0> with exit value: {badarith,[{math,log10,[16#00012D269 C3FA9D767CB55B0DDF8E6A2DE7B1D967FF8D0BE61EB16ACD02D1A53C95A370ABD95285998D226919 D95DCA54298D92C348C27E635E1690E7858060F0DC14E885F0217413C55A1F820D6EB051F87C7C73 818AC23E4A9A00A2072C08C6697A2FAD66FC7DEBEEB7A5F582D7639A431B9C99CEC6315A9ED1C652 A81A6B59A39]},{erl_eval,do_apply,5},{shell,exprs,6},{shell,eval_loop,3}]} ** exited: {badarith,[{math,log10, [211475298697902185255785861961179135570552502746803 25217495622655863402432394766663713782393252439761186467156621190833026337742520 45520741882086869936691237540043402509431087092122991804222930097654049305082429 75773774612140021599477983006713536106549441161323499077298115887067363710153036 315849480388057657]}, {erl_eval,do_apply,5}, {shell,exprs,6}, {shell,eval_loop,3}]} ** My system is Windows XP, Erlang R10B. >fib(N) -> > trunc((1/sqrt(5)) * (pow(((1+sqrt(5))/2),N) - pow(((1-sqrt(5))/2),N))). > > Good solution :-) Now I also have different idea without using recursion. It is based on the following equation [F_n ] [1 1] [F_n-1] [ ] = [ ] * [ ] [F_n-1] [1 0] [F_n-2] And we just have to calculate k-th power of the matrix [[1,1],[1,0]]. It is possible to do within O(log(k)). >PS. The performance you are experiencing in your version of fib/1 > has nothing to do with tail recursion... > Yes, exponential number of recursive calls... Thank you. I'm sorry, but most interesting question (2nd, about number of digits in an integer) remains unanswered. 
But I guess it is very rare problem with numbers, so if there is no answer, I will understand. Best regards, Linker Nick. -------------- next part -------------- An HTML attachment was scrubbed... URL: From xlcr@REDACTED Mon Mar 27 07:04:34 2006 From: xlcr@REDACTED (Nick Linker) Date: Mon, 27 Mar 2006 12:04:34 +0700 Subject: Newbie questions In-Reply-To: <200603270326.k2R3Q601420348@atlas.otago.ac.nz> References: <200603270326.k2R3Q601420348@atlas.otago.ac.nz> Message-ID: <44277262.2000605@mail.ru> Richard A. O'Keefe wrote: > 2. It is impossible to know how many digits in an integer _efficiently_ > (i.e. with O(1) or O(log(n)), where n - number of digits). > length(number_to_list(N)) appears to have O(n). > >Anything else you can do with a number has to be at least O(n), >so *in the context of a real use* of bignum arithmetic, why does >measuring the size have to be O(lg n) rather than O(n)? > > I did exactly this: N is a number, n - is the number of digits = log(N). I have a big number N with n digits, and I am searching a way of computing n without enumerating all the digits. >This is a SPECTACULARLY INEFFICIENT way to compute fibonacci numbers >in *any* programming language. > > > Is there a way to compute fib(N), N>=100 without side effects? > >Yes, and it's easy. In fact, there are two ways. The simple obvious >tail recursion does O(N) integer additions. The less obvious way treats >it as a matrix exponentiation problem and does O(lg N) 2x2 integer >matrix multiplies. > > fib(0) -> 1; > fib(N) when N > 0 -> fib(N, 1, 1). > > fib(1, X, _) -> X; > fib(N, X, Y) -> fib(N-1, X+Y, X). > >This and the matrix exponentiation version generalise in fairly obvious >ways to recurrences of any order. > Unfortunately right after asking the question to the mailing list I got the same idea. (Unfortunately = because it is too late to get the question back). But thank you for the comprehensive explanation of my problem with recursion. Best regards, Linker Nick. From ok@REDACTED Mon Mar 27 07:23:43 2006 From: ok@REDACTED (Richard A. O'Keefe) Date: Mon, 27 Mar 2006 17:23:43 +1200 (NZST) Subject: Conditional compilation (was: Erlang/OTP R10B-10 has been released) Message-ID: <200603270523.k2R5Nhtw463068@atlas.otago.ac.nz> From ok@REDACTED Mon Mar 27 07:31:24 2006 From: ok@REDACTED (Richard A. O'Keefe) Date: Mon, 27 Mar 2006 17:31:24 +1200 (NZST) Subject: Conditional compilation (was: Erlang/OTP R10B-10 has been released) Message-ID: <200603270531.k2R5VOT9460900@atlas.otago.ac.nz> Just one comment before I go home for the night: I suppose so, but note that Erlang doesn't (AFAIK) have nestable multiline comments, I don't know any language that has. Not ones that *WORK*, anyway. I am, for example, aware of {- -} in Haskell, and I'm also aware of the trouble they had trying to get them to work, and that they failed. I'm also aware of #| |# in Common Lisp, and of the fact that they don't work either. so if you do have code that you want to disable temporarily because it doesn't compile, the only way to do that without the preprocessor is to add '%' to the start of every line. Perhaps there should be an explicit construct for this, equivalent to C's '#if 0' idiom. What this requires is *text editor* support, not language support. As I may have mentioned before, in my text editor there is a pair of comments: Ctrl-X Ctrl-[ Comment out region Ctrl-X Ctrl-] Uncomment region back in again Let your bracket-style comments be CD...DE (C=/, D=*, E=/ for C, C={, D=-, E=} for Haskell, C=#, D=|, E=E for Lisp). 
Then Ctrl-X Ctrl-[ inserts CD at the beginning of the region changes every "D" within the region to " D " inserts DE at the end of the region Ctrl-X Ctrl-] removes every CD or DE it sees in the region changes every " D " to "D". So to comment out a function, I do Meta-E find end of sentence, works for Erlang, if not, keep on until I do reach end of function. Ctrl-@ place marker Meta-Ctrl-R move back to beginning of function Ctrl-X Ctrl-[ That's *less* work than using the preprocessor, it *doesn't* require nesting comments, and unlike the deceitfully buggy nesting comments, it does actually work to comment out anything. Some people think the " / " sequences look a bit ugly, but it's a small price to pay for comment-out/uncomment-in that actually *works* with no special cases. What's more, I can do this for C, Prolog, Haskell, Pascal, ML, ... anything that has bracket-style comments, without any special support in any compiler. Nesting comments? They're for the birds. From tony@REDACTED Mon Mar 27 08:44:09 2006 From: tony@REDACTED (Tony Rogvall) Date: Mon, 27 Mar 2006 08:44:09 +0200 Subject: Newbie questions In-Reply-To: <44276DD2.2070503@mail.ru> References: <200603250847.k2P8lBfH007162@spikklubban.it.uu.se> <44276DD2.2070503@mail.ru> Message-ID: <55E3D4E7-D5BC-4730-BA3E-8246DDBDBC40@rogvall.com> > Good solution :-) > Now I also have different idea without using recursion. It is based > on the following equation > [F_n ] [1 1] [F_n-1] > [ ] = [ ] * [ ] > [F_n-1] [1 0] [F_n-2] > And we just have to calculate k-th power of the matrix [[1,1], > [1,0]]. It is possible to do within O(log(k)). > Here is my contrib (written many years ago, it may work :-) ffib(N) -> {UN1,_,_,_} = lucas_numbers(1,-1,N-1), UN1. lucas_numbers(P, Q, M) -> m_mult(exp({P,-Q,1,0}, M), {1,P,0,2}). exp(A, N) when tuple(A), size(A)==4, is_integer(N), N > 0 -> m_exp(A, N, {1,0,0,1}). m_exp(A, 1, P) -> m_mult(A,P); m_exp(A, N, P) when ?even(N) -> m_exp(m_sqr(A), N div 2, P); m_exp(A, N, P) -> m_exp(m_sqr(A), N div 2, m_mult(A, P)). m_mult({A11,A12,A21,A22}, {B11,B12,B21,B22}) -> { A11*B11 + A12*B21, A11*B12 + A12*B22, A21*B11 + A22*B21, A21*B12 + A22*B22 }. m_sqr({A,B,C,D}) -> BC = B*C, A_D = A + D, { A*A+BC, B*A_D, C*A_D, D*D + BC }. /Tony From taavi.talvik@REDACTED Fri Mar 24 15:11:20 2006 From: taavi.talvik@REDACTED (Taavi Talvik) Date: Fri, 24 Mar 2006 16:11:20 +0200 Subject: in-ram lists In-Reply-To: <20060324132656.GA12516@semp.x-si.org> References: <20060324122447.GA12165@semp.x-si.org> <4423EA86.20403@yahoo.fr> <20060324132656.GA12516@semp.x-si.org> Message-ID: <4feeacb14f9d72d767ac06172a64354f@elisa.ee> On Mar 24, 2006, at 3:26 PM, Damir Horvat wrote: > On Fri, Mar 24, 2006 at 01:48:06PM +0100, R?mi H?rilier wrote: > >> Don't forget to get the value returned by queue:in(Item,Queue). >> >> 1> Q1 = queue:new(). >> {[],[]} >> 2> Q2 = queue:in("item1", Q1). >> {["item1"],[]} >> 3> Q3 = queue:in("item2", Q2). >> {["item2"],["item1"]} >> 4> > > Ok, I get this. But what bothers me is, how to do this in functional > way? Why in functional way? Try erlang way;) Untested code follows: % Creates new queue (creates erlang proccess), Retuns pid, which will used as queue identifier new_queue() -> Q = queue:new(), Pid = spwan(?MODULE, queue_manager, [Q]), Pid. push(element, Queue) -> Queue ! {push, element}. shift(Queue) -> Queue ! shift, receive Response -> Response end. queue_manager(Q) -> receive {push, Element} -> Q2 = queue:in(Element, Q), queue_manager(Q2); {Pid,shift} -> {Result, Q2} = queue:out(Q) Pid ! 
Result, queue_manager(Q2) end. best regards, taavi From gunilla@REDACTED Mon Mar 27 09:08:42 2006 From: gunilla@REDACTED (Gunilla Arendt) Date: Mon, 27 Mar 2006 09:08:42 +0200 Subject: os_mon & alarm_handler in R10B-10 In-Reply-To: <44249B8E.2030105@hq.idt.net> References: <44246D83.2000402@hq.idt.net> <44249B8E.2030105@hq.idt.net> Message-ID: <44278F7A.5070702@erix.ericsson.se> It's a bug in os_mon, it shouldn't use get_alarms(). Thanks for the heads up. Regards, Gunilla Serge Aleynikov wrote: > For now I used the following patch to take care of this issue, but I > would be curious to hear the opinion of the OTP staff. > > Regards, > > Serge > > --- alarm_handler.erl.orig Fri Mar 24 20:08:18 2006 > +++ alarm_handler.erl Fri Mar 24 20:19:15 2006 > @@ -58,7 +58,12 @@ > %% Returns: [{AlarmId, AlarmDesc}] > %%----------------------------------------------------------------- > get_alarms() -> > - gen_event:call(alarm_handler, alarm_handler, get_alarms). > + case gen_event:which_handlers(alarm_handler) of > + [M | _] -> > + gen_event:call(alarm_handler, M, get_alarms); > + [] -> > + [] > + end. > > add_alarm_handler(Module) when atom(Module) -> > gen_event:add_handler(alarm_handler, Module, []). > > > Serge Aleynikov wrote: >> Hi, >> >> I've been experimenting with the reworked os_mon in R10B-10, and >> encountered the following issue. >> >> The documentation encourages to replace the default alarm handler with >> something more sophisticated. For that reason I created a custom >> handler - lama_alarm_h (LAMA app in jungerl), which uses >> gen_event:swap_sup_handler/3. >> >> I initiate that handler prior to starting OS_MON, and then start OS_MON. >> >> In the latest release R10B-10, OS_MON calls alarm_handler:get_alarms/0 >> upon startup. >> >> This causes the 'alarm_handler' event manager issue a call in the >> alarm_handler.erl module. However, since that handler was replaced by >> a custom alarm handler, the gen_event's call fails with >> {error, bad_module}. >> >> gen_event always dispatches a call/3 to a specific handler module >> passed as a parameter, e.g.: >> >> -----[alarm_handler.erl (line: 60)]----- >> get_alarms() -> >> gen_event:call(alarm_handler, alarm_handler, get_alarms). >> ---------------------------------------- >> >> Yet, if the alarm_handler handler was swapped by another module, the >> gen_event:call will report an error, therefore crashing OS_MON. >> >> One way to resolve this problem would be to introduce another exported >> function in gen_event: >> >> gen_event:call(EventMgrRef, Request) -> Result >> >> Can the OTP team suggest some other workaround? >> >> Serge >> > From invisio22@REDACTED Sat Mar 25 11:29:58 2006 From: invisio22@REDACTED (Eric Shun) Date: Sat, 25 Mar 2006 11:29:58 +0100 Subject: Mnesia request Message-ID: <3f9db9f20603250229j2d9214f1u@mail.gmail.com> Hello, I'm just starting to learn how to use Mnesia and I can't find how to select every rows in a table, which contain at least one field undefined... How can I do that? -------------- next part -------------- An HTML attachment was scrubbed... URL: From ft@REDACTED Mon Mar 27 09:29:34 2006 From: ft@REDACTED (Fredrik Thulin) Date: Mon, 27 Mar 2006 09:29:34 +0200 Subject: in-ram lists In-Reply-To: <4feeacb14f9d72d767ac06172a64354f@elisa.ee> References: <20060324132656.GA12516@semp.x-si.org> <4feeacb14f9d72d767ac06172a64354f@elisa.ee> Message-ID: <200603270929.34092.ft@it.su.se> On Friday 24 March 2006 15:11, Taavi Talvik wrote: ... > Untested code follows: Noted ;) ... 
> Pid = spwan(?MODULE, queue_manager, [Q]), Pid = spawn(...) ... > shift(Queue) -> > Queue ! shift, make this Queue ! {self(), shift} to match the "receive {Pid, shift}" below ... > {Pid,shift} -> > {Result, Q2} = queue:out(Q) > Pid ! Result, > queue_manager(Q2) /Fredrik From joe.armstrong@REDACTED Mon Mar 27 11:08:27 2006 From: joe.armstrong@REDACTED (Joe Armstrong (AL/EAB)) Date: Mon, 27 Mar 2006 11:08:27 +0200 Subject: Newbie questions Message-ID: Hello Nick, 1) - can I compute fib(N) efficiently, and, 2) - how many (decimal) digits are there in fib(N) The answer to 1) is yes Your original version runs in O (2 ^0.69 N) time - The improved version is O(N) but you can do better than this and do this in O(ln N) (see the section marked double iteration in http://en.wikipedia.org/wiki/Fibonacci_number_program) 2) is much more difficult - you could or course just compute fib(N) then take the log - but this is cheating - so can you compute the number of decomal digits in fib(N) without actually going to the trouble of computing fib(N) - now this might be easy but it certainly is not obvious... /Joe ________________________________ From: owner-erlang-questions@REDACTED [mailto:owner-erlang-questions@REDACTED] On Behalf Of Nick Linker Sent: den 27 mars 2006 05:27 To: paris@REDACTED Cc: Erlang Questions Subject: Re: Newbie questions Javier Par?s Fern?ndez wrote: I made my version, but after posting the question :-) fib(0) -> 0; fib(1) -> 1; fib(N) -> fib(N, 0, 1). fib(I, Pr, Cu) when I =< 1 -> Cu; fib(I, Pr, Cu) -> fib(I-1, Cu, Pr + Cu). Thank you for your answer nonetheless. However, as Kostis said, this has more to do with having two recursive calls each time than with it being tail recursion or not. If you try to see how it evaluated, the number of recursive calls expands exponentially. Regards. Best regards, Linker Nick. -------------- next part -------------- An HTML attachment was scrubbed... URL: From pupeno@REDACTED Sat Mar 25 11:26:13 2006 From: pupeno@REDACTED (Pupeno) Date: Sat, 25 Mar 2006 11:26:13 +0100 Subject: Naming conventions or style Message-ID: <200603251126.13633.pupeno@pupeno.com> Is there some official (or not) Erlang naming conventions or styles for modules, functions, atoms, variables, records, etc ? Thanks. -- Pupeno (http://pupeno.com) -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 189 bytes Desc: not available URL: From pupeno@REDACTED Mon Mar 27 13:06:49 2006 From: pupeno@REDACTED (Pupeno) Date: Mon, 27 Mar 2006 13:06:49 +0200 Subject: Distributed programming Message-ID: <200603271307.14798.pupeno@pupeno.com> Hello. I have a basic question on distributed programming. My case is: I have a module called launcher which opens a tcp port and listens to connections. When a new connection is made, another process is launched to attend that connection. Now, when I think about distributing this for load balancing I see this possibilities: - Run various launchers on various computers and do the balancing through DNS. - Run one launcher on one computer and make the launched processes run on other computers. The first has the advantage of providing high-availability as well and all the processes may access the same (mnesia) database. This is something that I could possible do in C (or C++, Python or any language) using MySQL and MySQL clustering, am I wrong ? The second... the second, is it possible at all ? Can I launch a process in another node and still let it handle a local socket ? 
And is it possible to have a pool of nodes and launch new process in the one with less load ? Somehow I feel like I am not seeing the whole picture (or that I am missing some important Erlang feature). Can anybody enlighten me ? (reading material is welcome). -- Pupeno (http://pupeno.com) PS: When I mention servers thing about typical Internet servers: web, smtp, pop3, imap, dns, jabber, etc. -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 189 bytes Desc: not available URL: From jeremie@REDACTED Mon Mar 27 20:31:19 2006 From: jeremie@REDACTED (=?ISO-8859-1?Q?J=E9r=E9mie_Lumbroso?=) Date: Mon, 27 Mar 2006 20:31:19 +0200 Subject: Efficiency: registered process or loop-function argument? Message-ID: <2b7b425b0603271031x154fc5c6y89b76849b201b832@mail.gmail.com> Hello, I have server that spawns a new 'handler' process/function for every incoming socket. Those 'handler' process must communicate with the server process. I am wondering which of the two following solutions is best. 1. Should the server process register itself with the server atom, as in: start() -> [...] register(server, spawn(test, server_loop, [LSock])). server_loop(LSock) -> case gen_tcp:accept(LSock, 1000) of {ok, Socket} -> [...] Client_Pid = spawn(?MODULE, relay, [Socket]), gen_tcp:controlling_process(Socket, Client_Pid), server_loop(LSock); [...] end. relay(Socket) -> receive {tcp, Socket, Bin} -> Data = binary_to_list(Bin), server ! {self(), Data}, ?MODULE:relay(Socket); [...] end. 2. Should I pass the server's pid as an argument to the client loop, such as: start() -> [...] spawn(test, server_loop, [LSock]). server_loop(LSock) -> case gen_tcp:accept(LSock, 1000) of {ok, Socket} -> [...] Client_Pid = spawn(?MODULE, relay, [Socket, self()]), gen_tcp:controlling_process(Socket, Client_Pid), server_loop(LSock); [...] end. relay(Socket, Server) -> receive {tcp, Socket, Bin} -> Data = binary_to_list(Bin), Server ! {self(), Data}, ?MODULE:relay(Socket, Server); [...] end. Or rather, I realize this question is on a per-case basis mostly ... so really, what I would like to ask, is whether there are any side-effects to doing it the second way (by passing an argument). Is the overhead for an extra argument on a loop function like that big? Thanks, J?r?mie Lumbroso -------------- next part -------------- An HTML attachment was scrubbed... URL: From serge@REDACTED Mon Mar 27 20:32:12 2006 From: serge@REDACTED (Serge Aleynikov) Date: Mon, 27 Mar 2006 13:32:12 -0500 Subject: os_mon & alarm_handler in R10B-10 In-Reply-To: <44278F7A.5070702@erix.ericsson.se> References: <44246D83.2000402@hq.idt.net> <44249B8E.2030105@hq.idt.net> <44278F7A.5070702@erix.ericsson.se> Message-ID: <44282FAC.3090804@hq.idt.net> Gunilla, I believe there might be another bug in SNMP revealed by my experiments with OS_MON & OTP_MIBS. If mnesia is started *after* the snmp agent, and the snmp agent has the mibs parameter set, an attempt to initialize mib OIDs using instrumentation functions with the 'new' operation (such as otp_mib:erl_node_table(new)), leads to an ignored exception that ideally should prevent the SNMP agent from starting. Release file: ============= {release, {"dripdb", "1.0"}, {erts, "5.4.13"}, [ {kernel , "2.10.13"}, {stdlib , "1.13.12"}, {sasl , "2.1.1"}, {lama , "1.0"}, {otp_mibs, "1.0.4"}, {os_mon , "2.0"}, {snmp , "4.7.1"}, {mnesia , "4.2.5"} ] }. 
Config file: ============ %%------------ SNMP agent configuration ---------------------- {snmp, [{agent, [{config, [{dir, "etc/snmp/"}, {force_load, true} ]}, {db_dir, "var/snmp_db/"}, {mibs, ["mibs/priv/OTP-MIB", "mibs/priv/OTP-OS-MON-MIB"]} ] } ] } This is a trace of the error which hides the fact that there was a problem with creation of the 'erlNodeAlloc' table: (<0.126.0>) call snmpa_mib_data:call_instrumentation({me,[1,3,6,1,4,1,193,19,3,1,2,1,1,1], table_entry, erlNodeEntry, undefined, 'not-accessible', {otp_mib,erl_node_table,[]}, false, [{table_entry_with_sequence,'ErlNodeEntry'}], undefined, undefined},new) (<0.126.0>) returned from snmpa_mib_data:call_instrumentation/2 -> {'EXIT',{aborted,{node_not_running,drpdb@REDACTED}}} Therefore all the SNMP manager's calls to OIDs inside 'erlNodeTable' or 'applTable' tables fail. I can provide additional details if needed, if the information here is not sufficient. I believe the proper action to do would be not to absorb the error in the call_instrumentation function when the Operation is 'new'. I am providing the snippet of code where that exception is currently ignored: snmpa_mib_data.erl(line 1319): ============================== call_instrumentation(#me{entrytype = variable, mfa={M,F,A}}, Operation) -> ?vtrace("call instrumentation with" "~n entrytype: variable" "~n MFA: {~p,~p,~p}" "~n Operation: ~p", [M,F,A,Operation]), catch apply(M, F, [Operation | A]); ... Regards, Serge Gunilla Arendt wrote: > It's a bug in os_mon, it shouldn't use get_alarms(). > Thanks for the heads up. > > Regards, Gunilla > > > Serge Aleynikov wrote: > >> For now I used the following patch to take care of this issue, but I >> would be curious to hear the opinion of the OTP staff. >> >> Regards, >> >> Serge >> >> --- alarm_handler.erl.orig Fri Mar 24 20:08:18 2006 >> +++ alarm_handler.erl Fri Mar 24 20:19:15 2006 >> @@ -58,7 +58,12 @@ >> %% Returns: [{AlarmId, AlarmDesc}] >> %%----------------------------------------------------------------- >> get_alarms() -> >> - gen_event:call(alarm_handler, alarm_handler, get_alarms). >> + case gen_event:which_handlers(alarm_handler) of >> + [M | _] -> >> + gen_event:call(alarm_handler, M, get_alarms); >> + [] -> >> + [] >> + end. >> >> add_alarm_handler(Module) when atom(Module) -> >> gen_event:add_handler(alarm_handler, Module, []). >> >> >> Serge Aleynikov wrote: >> >>> Hi, >>> >>> I've been experimenting with the reworked os_mon in R10B-10, and >>> encountered the following issue. >>> >>> The documentation encourages to replace the default alarm handler >>> with something more sophisticated. For that reason I created a >>> custom handler - lama_alarm_h (LAMA app in jungerl), which uses >>> gen_event:swap_sup_handler/3. >>> >>> I initiate that handler prior to starting OS_MON, and then start OS_MON. >>> >>> In the latest release R10B-10, OS_MON calls >>> alarm_handler:get_alarms/0 upon startup. >>> >>> This causes the 'alarm_handler' event manager issue a call in the >>> alarm_handler.erl module. However, since that handler was replaced >>> by a custom alarm handler, the gen_event's call fails with >>> {error, bad_module}. >>> >>> gen_event always dispatches a call/3 to a specific handler module >>> passed as a parameter, e.g.: >>> >>> -----[alarm_handler.erl (line: 60)]----- >>> get_alarms() -> >>> gen_event:call(alarm_handler, alarm_handler, get_alarms). 
>>> ---------------------------------------- >>> >>> Yet, if the alarm_handler handler was swapped by another module, the >>> gen_event:call will report an error, therefore crashing OS_MON. >>> >>> One way to resolve this problem would be to introduce another >>> exported function in gen_event: >>> >>> gen_event:call(EventMgrRef, Request) -> Result >>> >>> Can the OTP team suggest some other workaround? >>> >>> Serge >>> >> > > -- Serge Aleynikov R&D Telecom, IDT Corp. Tel: (973) 438-3436 Fax: (973) 438-1464 serge@REDACTED From ola.a.andersson@REDACTED Mon Mar 27 21:44:38 2006 From: ola.a.andersson@REDACTED (Ola Andersson A (AL/EAB)) Date: Mon, 27 Mar 2006 21:44:38 +0200 Subject: Distributed programming Message-ID: <148408C0A2D44A41AB295D74E18399750135569B@esealmw105.eemea.ericsson.se> Maybe you should take a look at Eddie? That could give you some ideas. /OLA. > -----Original Message----- > From: owner-erlang-questions@REDACTED > [mailto:owner-erlang-questions@REDACTED] On Behalf Of Pupeno > Sent: den 27 mars 2006 13:07 > To: erlang-questions@REDACTED > Subject: Distributed programming > > Hello. > I have a basic question on distributed programming. My case > is: I have a module called launcher which opens a tcp port > and listens to connections. > When a new connection is made, another process is launched to > attend that connection. > Now, when I think about distributing this for load balancing > I see this > possibilities: > - Run various launchers on various computers and do the > balancing through DNS. > - Run one launcher on one computer and make the launched > processes run on other computers. > The first has the advantage of providing high-availability as > well and all the processes may access the same (mnesia) > database. This is something that I could possible do in C (or > C++, Python or any language) using MySQL and MySQL > clustering, am I wrong ? > The second... the second, is it possible at all ? Can I > launch a process in another node and still let it handle a > local socket ? And is it possible to have a pool of nodes and > launch new process in the one with less load ? > Somehow I feel like I am not seeing the whole picture (or > that I am missing some important Erlang feature). > Can anybody enlighten me ? (reading material is welcome). > -- > Pupeno (http://pupeno.com) > > PS: When I mention servers thing about typical Internet > servers: web, smtp, pop3, imap, dns, jabber, etc. > From chandrashekhar.mullaparthi@REDACTED Mon Mar 27 22:07:34 2006 From: chandrashekhar.mullaparthi@REDACTED (chandru) Date: Mon, 27 Mar 2006 21:07:34 +0100 Subject: About Erlang system nodes In-Reply-To: <00ec01c651bd$b12ad180$5e0fa8c0@HP78819433158> References: <1139851452.29242.16.camel@home> <1139868771.29242.38.camel@home> <1139952724.1215.7.camel@gateway> <1143140040.25413.16.camel@gateway> <00ec01c651bd$b12ad180$5e0fa8c0@HP78819433158> Message-ID: On 27/03/06, Renyi Xiong wrote: > > But I found if we run distributed erlang over SSL, it only affects those > distributed command like spawn_link. It doesn't affect primitive command > like message passing command - '!' which we concern about. Cause that > means > if we run distributed Mnesia, it doesn't automatically have encrypted > communication between Mnesia nodes even if SSL is employed. Is that > correct? > > Thanks a lot, > Renyi. Hmm..I don't know about that. I never tried erlang dist over SSL. When you say you found out, how did you find out? Did you sniff the IP traffic between the nodes and successfully decode them? 
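One quick way to check (untested sketch; the node and registered names below are made up): send a message with an easily recognisable payload across the link and look for it in a packet capture. The registered name does not have to exist on the remote node - the term is still shipped over the distribution connection and simply dropped at the far end.

% on one node, with 'other@somehost' being the second node:
{no_such_name, 'other@somehost'} ! {marker, "CLEARTEXT-MARKER-12345"}.

Then run tcpdump or ethereal on the traffic between the nodes and search for the marker string. Over plain distribution it is visible in clear text; with SSL distribution set up as in the user's guide referenced earlier in this thread it should not be.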
cheers Chandru -------------- next part -------------- An HTML attachment was scrubbed... URL: From ulf.wiger@REDACTED Tue Mar 28 00:56:01 2006 From: ulf.wiger@REDACTED (Ulf Wiger (AL/EAB)) Date: Tue, 28 Mar 2006 00:56:01 +0200 Subject: obscure erlang-related publication Message-ID: I came across this publication: "A comparison of six languages for system level description of telecom applications" Jantsch, Kumar, Sander et al. http://www.imit.kth.se/~axel/papers/2000/comparison.pdf I didn't see that under erlang.se/publications/ Was this paper known to others on this list? "Abstract: Based on a systematic evaluation method with a large number of criteria we compare six languages with respect to the suitability as a system specification and description language for telecom applications. The languages under evaluation are VHDL, C++, SDL, Haskell, Erlang, and ProGram. The evaluation method allows to give specific emphasis on particular aspects in a controlled way, which we use to make separate comparisons for pure software systems, pure hardware systems and mixed HW/SW systems." To cut to the chase, Erlang stands up well. C++ fares "surprisingly" poorly as a specification language, and the whole thing is distilled into the following little table: Context suitable languages ---------------- ------------------------- Control software Erlang, VHDL, SDL mixed HW/SW Erlang, Haskell, VHDL, SDL pure functional C++, Haskell, VHDL pure HW Haskell, VHDL, SDL simple HW Erlang, VHDL Unsurprisingly, Erlang gets poor marks on data modeling, typing, timing modeling and structural details, but is deemed a good modeling language esp. for complex control software. BR, Ulf W From orbitz@REDACTED Tue Mar 28 01:25:30 2006 From: orbitz@REDACTED (orbitz@REDACTED) Date: Mon, 27 Mar 2006 18:25:30 -0500 Subject: Naming conventions or style In-Reply-To: <200603251126.13633.pupeno@pupeno.com> References: <200603251126.13633.pupeno@pupeno.com> Message-ID: <17B1BDE3-637B-4605-8584-B5B14D2A51AF@ezabel.com> Not really as far as I know. I prefer to seperate thigsn wtih _ as uppose dto teh capitalLetter way I use in python. For instance foo_handler instead of fooHandler. The language obviousl restricts you in some ways so I tend to go with the most natural looking way. On Mar 25, 2006, at 5:26 AM, Pupeno wrote: > Is there some official (or not) Erlang naming conventions or styles > for > modules, functions, atoms, variables, records, etc ? > Thanks. > -- > Pupeno (http://pupeno.com) From ft@REDACTED Tue Mar 28 08:16:05 2006 From: ft@REDACTED (Fredrik Thulin) Date: Tue, 28 Mar 2006 08:16:05 +0200 Subject: Efficiency: registered process or loop-function argument? In-Reply-To: <2b7b425b0603271031x154fc5c6y89b76849b201b832@mail.gmail.com> References: <2b7b425b0603271031x154fc5c6y89b76849b201b832@mail.gmail.com> Message-ID: <200603280816.05325.ft@it.su.se> On Monday 27 March 2006 20:31, J?r?mie Lumbroso wrote: > Hello, > > I have server that spawns a new 'handler' process/function for every > incoming socket. > > Those 'handler' process must communicate with the server process. I > am wondering which of the two following solutions is best. ... > Or rather, I realize this question is on a per-case basis mostly ... > so really, what I would like to ask, is whether there are any > side-effects to doing it the second way (by passing an argument). Is > the overhead for an extra argument on a loop function like that big? 
I bet your biggest problem will be that your 1000 connection handlers all send data to one single process ('server' in your example). The problem wouldn't be the cost of doing a named-process lookup versus passing a Pid argument around. You would very efficiently remove all the parallell aspects of your program, and (based on own experience) end up with a very slow serialized program where the cost of registered-name vs. Pid would probably not even be measurable. Seek high performace elsewhere, preferably with experimentation and measurements of different designs. /Fredrik From yani.dzhurov@REDACTED Tue Mar 28 08:55:14 2006 From: yani.dzhurov@REDACTED (Yani Dzhurov) Date: Tue, 28 Mar 2006 09:55:14 +0300 Subject: Erlang VM Message-ID: <01fa01c65234$91097150$1500a8c0@name3d6d1f4b1d> Hi guys, I wondered how the Erlang virtual machine works with creating a lot of alike objects. This is my function for example: my_fun()-> Tree = gb_tree(), :, foo(), bar(), baz(). And if a "spawn" this function into a hundred of processes will the Erlang VM machine create hundred of gb_trees, or it will create just one instance and a hundred references to it. Sorry if my question is pretty stupid but I'm very familiar with Erlang. As far as I know in Java, if I have String a = "abc"; String b = "abc"; The Java VM will create just one object string "abc" and point both 'a' and 'b' to that object, since string is immutable and there would be no collisions with using that object. In Erlang objects are immutable also, right ? So will the Erlang VM do the same as the Java one? Thanks, Yani -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/x-pkcs7-signature Size: 3326 bytes Desc: not available URL: From mikpe@REDACTED Mon Mar 27 13:37:03 2006 From: mikpe@REDACTED (Mikael Pettersson) Date: Mon, 27 Mar 2006 13:37:03 +0200 (MEST) Subject: [PATCH] HiPE fix for glibc-2.4 / Fedora Core 5 Message-ID: <200603271137.k2RBb3oG023493@harpo.it.uu.se> On Mon, 30 Jan 2006 11:38:10 -0500, Serge Aleynikov wrote: >We ran into the hipe compilation issue (R10B-9) on Linux Fedora related >to the newer GLIBC version. > >$ uname -a >Linux stardev1.corp.idt.net 2.6.15-1.1881_FC5.idtsmp #1 SMP PREEMPT Mon >Jan 30 02:16:35 EST 2006 i686 i686 i386 GNU/Linux > >Installed Packages >glibc.i686 2.3.90-30 > >$ make >... >/home/serge/tmp/otp_src_R10B-9/erts/obj.hybrid.beam/i686-pc-linux-gnu/hipe_x86_signal.o: >In function `my_sigaction':hipe/hipe_x86_signal.c:182: undefined >reference to `INIT' >:hipe/hipe_x86_signal.c:192: undefined reference to `__next_sigaction' >/home/serge/tmp/otp_src_R10B-9/erts/obj.hybrid.beam/i686-pc-linux-gnu/hipe_x86_signal.o: >In function `hipe_signal_init':hipe/hipe_x86_signal.c:225: undefined >reference to `INIT' >/home/serge/tmp/otp_src_R10B-9/erts/obj.hybrid.beam/i686-pc-linux-gnu/hipe_x86_signal.o: >In function `my_sigaction':hipe/hipe_x86_signal.c:182: undefined >reference to `INIT' >:hipe/hipe_x86_signal.c:192: undefined reference to `__next_sigaction' >:hipe/hipe_x86_signal.c:182: undefined reference to `INIT' >:hipe/hipe_x86_signal.c:192: undefined reference to `__next_sigaction' >collect2: ld returned 1 exit status I've tested things on FC5 final which was released a week ago, and I can confirm that the code for glibc-2.3 in hipe_x86_signal.c does work fine for glibc-2.4. 
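A quick sanity check of a patched build, from the Erlang shell (untested here; 'mymodule' is just any module of your own in the current directory):

1> c(mymodule, [native]).
{ok,mymodule}

If the emulator was built with HiPE enabled, the native compile and load should go through without complaints.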
The patch that's been committed to the R11 development branch is included below; it should also be applied to R10B-10. /Mikael The HiPE team --- otp-0317/erts/emulator/hipe/hipe_x86_signal.c.~1~ 2005-10-21 14:02:57.000000000 +0200 +++ otp-0317/erts/emulator/hipe/hipe_x86_signal.c 2006-03-27 12:30:08.000000000 +0200 @@ -27,7 +27,7 @@ #include #include "hipe_signal.h" -#if __GLIBC__ == 2 && __GLIBC_MINOR__ == 3 +#if __GLIBC__ == 2 && (__GLIBC_MINOR__ == 3 || __GLIBC_MINOR__ == 4) /* See comment below for glibc 2.2. */ #ifndef __USE_GNU #define __USE_GNU /* to un-hide RTLD_NEXT */ From rxiong@REDACTED Mon Mar 27 18:44:18 2006 From: rxiong@REDACTED (Renyi Xiong) Date: Mon, 27 Mar 2006 08:44:18 -0800 Subject: About Erlang system nodes References: <43A67622.9070000@hyber.org> <1139851452.29242.16.camel@home> <1139868771.29242.38.camel@home> <1139952724.1215.7.camel@gateway> <1143140040.25413.16.camel@gateway> Message-ID: <00ec01c651bd$b12ad180$5e0fa8c0@HP78819433158> But I found if we run distributed erlang over SSL, it only affects those distributed command like spawn_link. It doesn't affect primitive command like message passing command - '!' which we concern about. Cause that means if we run distributed Mnesia, it doesn't automatically have encrypted communication between Mnesia nodes even if SSL is employed. Is that correct? Thanks a lot, Renyi. ----- Original Message ----- From: "chandru" To: Cc: ; "Renyi Xiong" Sent: Friday, March 24, 2006 1:32 AM Subject: Re: About Erlang system nodes On 23/03/06, Tony Zheng wrote: > Hi Chandru > > Are there any encrypted mechanisms when Mnesia replicate tables on > different Erlang nodes? We will put Erlang nodes in different locations > on internet, we want to know if it is secure for Mnesia to replicate > tables on internet. > Thanks. You can run distributed erlang over SSL. That will encrypt all traffic between the nodes. See http://www.erlang.org/doc/doc-5.4.13/lib/ssl-3.0.11/doc/html/usersguide_frame.html for more info on how to do this. cheers Chandru From rxiong@REDACTED Tue Mar 28 02:34:18 2006 From: rxiong@REDACTED (Renyi Xiong) Date: Mon, 27 Mar 2006 16:34:18 -0800 Subject: About Erlang system nodes References: <1139851452.29242.16.camel@home> <1139868771.29242.38.camel@home> <1139952724.1215.7.camel@gateway> <1143140040.25413.16.camel@gateway> <00ec01c651bd$b12ad180$5e0fa8c0@HP78819433158> Message-ID: <010c01c651ff$59a89a90$5e0fa8c0@HP78819433158> I just don't find any clues in otp source code that the primitive command "!" is related to the communication module selected through -proto_dist option. I can sniff the ip packet to see what happens if we run erlang/mnesia over SSL. Renyi. ----- Original Message ----- From: chandru To: Renyi Xiong Cc: tzheng@REDACTED ; erlang-questions@REDACTED Sent: Monday, March 27, 2006 12:07 PM Subject: Re: About Erlang system nodes On 27/03/06, Renyi Xiong wrote: But I found if we run distributed erlang over SSL, it only affects those distributed command like spawn_link. It doesn't affect primitive command like message passing command - '!' which we concern about. Cause that means if we run distributed Mnesia, it doesn't automatically have encrypted communication between Mnesia nodes even if SSL is employed. Is that correct? Thanks a lot, Renyi. Hmm..I don't know about that. I never tried erlang dist over SSL. When you say you found out, how did you find out? Did you sniff the IP traffic between the nodes and successfully decode them? 
cheers Chandru -------------- next part -------------- An HTML attachment was scrubbed... URL: From tzheng@REDACTED Tue Mar 28 02:45:39 2006 From: tzheng@REDACTED (Tony Zheng) Date: Mon, 27 Mar 2006 16:45:39 -0800 Subject: SSL for Erlang Distribution Message-ID: <1143506739.7721.6.camel@gateway> If the distributed erlang nodes are run over SSL, it only affects those distributed command like spawn_link. It doesn't affect primitive command like message passing command - '!'. That means if we run distributed Mnesia, it doesn't automatically have encrypted communication between Mnesia nodes even if SSL is employed. Is that correct? Thanks a lot, Renyi. From joe.armstrong@REDACTED Tue Mar 28 09:25:01 2006 From: joe.armstrong@REDACTED (Joe Armstrong (AL/EAB)) Date: Tue, 28 Mar 2006 09:25:01 +0200 Subject: Efficiency: registered process or loop-function argument? Message-ID: Difficult - I'd use method 2. I guess it really depends upon what you want to do later. You code looks very much like http://www.sics.se/~joe/tutorials/web_server/tcp_server.erl You might like to read http://www.sics.se/~joe/tutorials/web_server/web_server.html which has some discussion about a web sever which spawns a new handler for every incoming socket. /Joe ________________________________ From: owner-erlang-questions@REDACTED [mailto:owner-erlang-questions@REDACTED] On Behalf Of J?r?mie Lumbroso Sent: den 27 mars 2006 20:31 To: erlang-questions@REDACTED Subject: Efficiency: registered process or loop-function argument? Hello, I have server that spawns a new 'handler' process/function for every incoming socket. Those 'handler' process must communicate with the server process. I am wondering which of the two following solutions is best. 1. Should the server process register itself with the server atom, as in: start() -> [...] register(server, spawn(test, server_loop , [LSock])). server_loop(LSock) -> case gen_tcp:accept(LSock, 1000) of {ok, Socket} -> [...] Client_Pid = spawn(?MODULE, relay, [Socket]), gen_tcp:controlling_process(Socket, Client_Pid), server_loop (LSock); [...] end. relay(Socket) -> receive {tcp, Socket, Bin} -> Data = binary_to_list(Bin), server ! {self(), Data}, ?MODULE:relay(Socket); [...] end. 2. Should I pass the server's pid as an argument to the client loop, such as: start() -> [...] spawn(test, server_loop, [LSock]). server_loop(LSock) -> case gen_tcp:accept(LSock, 1000) of {ok, Socket} -> [...] Client_Pid = spawn(?MODULE, relay, [Socket, self()]), gen_tcp:controlling_process(Socket, Client_Pid), server_loop (LSock); [...] end. relay(Socket, Server) -> receive {tcp, Socket, Bin} -> Data = binary_to_list(Bin), Server ! {self(), Data}, ?MODULE:relay(Socket, Server); [...] end. Or rather, I realize this question is on a per-case basis mostly ... so really, what I would like to ask, is whether there are any side-effects to doing it the second way (by passing an argument). Is the overhead for an extra argument on a loop function like that big? Thanks, J?r?mie Lumbroso -------------- next part -------------- An HTML attachment was scrubbed... URL: From vlad_dumitrescu@REDACTED Tue Mar 28 09:32:01 2006 From: vlad_dumitrescu@REDACTED (Vlad Dumitrescu) Date: Tue, 28 Mar 2006 09:32:01 +0200 Subject: Erlang VM In-Reply-To: <01fa01c65234$91097150$1500a8c0@name3d6d1f4b1d> Message-ID: Hi! > From: owner-erlang-questions@REDACTED [mailto:owner-erlang-questions@REDACTED] On Behalf Of Yani Dzhurov > > my_fun()-> > Tree = gb_tree(), > ., > foo(), > bar(), > baz(). 
> And if a "spawn" this function into a hundred of processes will the Erlang VM machine create hundred of gb_trees, or it will create just one instance and a hundred references to it. Sorry if my question is pretty stupid but I'm very familiar with Erlang. There will be a different gb_tree for each process. Processes have separate data heaps. Some sharing is possible [*], but that's just an optimization. Even if you do Tree = gb_tree(), spawn(fun() -> my_fun(Tree) end), With an appropriately changed my_fun, each process still receives a copy of Tree. [*] for binaries of size < 64, sent as messages. Best regards, Vlad From raimo@REDACTED Tue Mar 28 09:35:43 2006 From: raimo@REDACTED (Raimo Niskanen) Date: 28 Mar 2006 09:35:43 +0200 Subject: Erlang VM References: <01fa01c65234$91097150$1500a8c0@name3d6d1f4b1d> Message-ID: It is a very reasonable question, not a stupid. It can probably be found if you dig deep enough into some FAQ... Every process has its own data heap and stack, so data that is sent to another process gets copied. This is to keep garbage collection on separate processes independent of each other. Common exceptions that have common storage for all processes: * Non-small binaries (> 64 bytes). * Ets-tables. * The experimental Hybrid Heap Emulator has an additional heap for common data. And data that is sent to another process is placed on the shared heap. The compiler might in the future determine which data that will later be sent and allocate it on the shared heap. The garbage collector is still being worked on, though... * The system can be designed to not store large data in many processes. Instead you can have a server process that holds the data; clients make querys and only store results while they are needed. yani.dzhurov@REDACTED (Yani Dzhurov) writes: > Hi guys, > > > > I wondered how the Erlang virtual machine works with creating a lot of alike > objects. > > This is my function for example: > > my_fun()-> > > Tree = gb_tree(), > > :, > > foo(), > > bar(), > > baz(). > > > > And if a "spawn" this function into a hundred of processes will the Erlang > VM machine create hundred of gb_trees, or it will create just one instance > and a hundred references to it. Sorry if my question is pretty stupid but > I'm very familiar with Erlang. > > > > As far as I know in Java, if I have > > String a = "abc"; > > String b = "abc"; > > The Java VM will create just one object string "abc" and point both 'a' and > 'b' to that object, since string is immutable and there would be no > collisions with using that object. In Erlang objects are immutable also, > right ? So will the Erlang VM do the same as the Java one? > > > > Thanks, > > > > Yani > > > > -- / Raimo Niskanen, Erlang/OTP, Ericsson AB From ft@REDACTED Tue Mar 28 09:42:36 2006 From: ft@REDACTED (Fredrik Thulin) Date: Tue, 28 Mar 2006 09:42:36 +0200 Subject: SSL for Erlang Distribution In-Reply-To: <1143506739.7721.6.camel@gateway> References: <1143506739.7721.6.camel@gateway> Message-ID: <200603280942.36257.ft@it.su.se> On Tuesday 28 March 2006 02:45, Tony Zheng wrote: > If the distributed erlang nodes are run over SSL, it only affects > those distributed command like spawn_link. It doesn't affect > primitive command like message passing command - '!'. That means if > we run distributed Mnesia, it doesn't automatically have encrypted > communication between Mnesia nodes even if SSL is employed. Is that > correct? No, I don't believe that to be correct. 
Could you please just try it (with ethereal for example, as you suggested yourselfs) and post questions based on the results of the experiments instead of based on hypotheses that seem highly unlikely? /Fredrik From ulf.wiger@REDACTED Tue Mar 28 09:52:25 2006 From: ulf.wiger@REDACTED (Ulf Wiger (AL/EAB)) Date: Tue, 28 Mar 2006 09:52:25 +0200 Subject: About Erlang system nodes Message-ID: This seems impossible. There is only one tcp session between two erlang nodes. All communication, be it spawn commands or pure message passing, is passed on the same link. The erlang:send/2 function (the ! operator) is implemented in the virtual machine. The VM knows which port is mapped to a given node, and sends messages through that port. If that port is opened over SSL, all communication between the two nodes will be encrypted. BR, Ulf W > -----Original Message----- > From: owner-erlang-questions@REDACTED > [mailto:owner-erlang-questions@REDACTED] On Behalf Of Renyi Xiong > Sent: den 27 mars 2006 18:44 > To: chandru; tzheng@REDACTED > Cc: erlang-questions@REDACTED > Subject: Re: About Erlang system nodes > > But I found if we run distributed erlang over SSL, it only > affects those distributed command like spawn_link. It doesn't > affect primitive command like message passing command - '!' > which we concern about. Cause that means if we run > distributed Mnesia, it doesn't automatically have encrypted > communication between Mnesia nodes even if SSL is employed. > Is that correct? > > Thanks a lot, > Renyi. > > ----- Original Message ----- > From: "chandru" > To: > Cc: ; "Renyi Xiong" > Sent: Friday, March 24, 2006 1:32 AM > Subject: Re: About Erlang system nodes > > > On 23/03/06, Tony Zheng wrote: > > Hi Chandru > > > > Are there any encrypted mechanisms when Mnesia replicate tables on > > different Erlang nodes? We will put Erlang nodes in > different locations > > on internet, we want to know if it is secure for Mnesia to replicate > > tables on internet. > > Thanks. > > > You can run distributed erlang over SSL. That will encrypt all traffic > between the nodes. > See > http://www.erlang.org/doc/doc-5.4.13/lib/ssl-3.0.11/doc/html/u > sersguide_frame.html > for more info on how to do this. > > cheers > Chandru > From raimo@REDACTED Tue Mar 28 10:01:50 2006 From: raimo@REDACTED (Raimo Niskanen) Date: 28 Mar 2006 10:01:50 +0200 Subject: SSL for Erlang Distribution References: <1143506739.7721.6.camel@gateway> Message-ID: When two distribute erlang nodes are connected, _all_ communication between the nodes goes over one TCP connection, encrypted or not. There is no difference between different distributed commands. A spawn|spawn_link/2,4 is just as distributed as a ! to a remote pid or {Name,Node} destination. If an operation targets a remote node that is not connected it will be autoconnected for any operation. (there are configuration flags that affect this) tzheng@REDACTED (Tony Zheng) writes: > If the distributed erlang nodes are run over SSL, it only affects those > distributed command like spawn_link. It doesn't affect primitive command > like message passing command - '!'. That means if we run distributed > Mnesia, it doesn't automatically have encrypted communication between > Mnesia nodes even if SSL is employed. Is that correct? No. Try! > > Thanks a lot, > > Renyi. 
-- / Raimo Niskanen, Erlang/OTP, Ericsson AB From joe.armstrong@REDACTED Tue Mar 28 10:10:07 2006 From: joe.armstrong@REDACTED (Joe Armstrong (AL/EAB)) Date: Tue, 28 Mar 2006 10:10:07 +0200 Subject: Distributed programming Message-ID: > -----Original Message----- > From: owner-erlang-questions@REDACTED > [mailto:owner-erlang-questions@REDACTED] On Behalf Of Pupeno > Sent: den 27 mars 2006 13:07 > To: erlang-questions@REDACTED > Subject: Distributed programming > > Hello. > I have a basic question on distributed programming. My case > is: I have a module called launcher which opens a tcp port > and listens to connections. > When a new connection is made, another process is launched to > attend that connection. > Now, when I think about distributing this for load balancing > I see this > possibilities: > - Run various launchers on various computers and do the > balancing through DNS. Yes - easy. > - Run one launcher on one computer and make the launched > processes run on other computers. Yes - but since you open the connection one machine, and the processing is taking place on some other computer you have three possibilities: 1) tunnel all the data through the original machine to the new machine 2) migrate the live session to a new computer 3) send a re-direct to the originating computer and ask it to re-connect to the new machine 1) is ok but only provided the ratio of computation in the back-end to work done to throughput the data in the front-end is acceptable 2) is *possible* though difficult - there was an (Erlang) paper published on moving live TCP sessions between machines - but I'm not sure if the code is stable and available 3) is IMHO the best possible method. BUT it needs active participation in the client. Protocols like HTTP have redirect and "moved permanently" build into the protocol. So if you have control of the protocol use method 3. By far the best. I note this technique is used by MSN messenger - you start off by logging in to a "login server" it immediately redirects you to a "traffic server" - if you start chatting to somebody, both partners might in principle be abruptly redirected to yet another server - so you can take advantage of locality (ie it would be silly for two people in say Sweden to be chatting thorough a common server in France - in this case one would redirect both parties to a server in Sweden) > The first has the advantage of providing high-availability as > well and all the processes may access the same (mnesia) > database. This is something that I could possible do in C (or > C++, Python or any language) using Mysql and MySQL > clustering, am I wrong ? Do it is not wrong but inadvisable - every time you change languages (and I count changing between mySQL and C as a language change) you get a semantic "mismatch" between the bits. Data base operations have "transaction semantics" (or should have) - many programming languages do not. Consider the following pseudo-code fragment: foo(N) -> database(do this), <- this is a data base call ... <- some code in some programming language 1/N, <- some arithmetic ... datbase(do that). ... now doing foo(1) will end up with "this" and "that" being done to the data base. but doing foo(0) will cause only "this" to be done to the data base. Really "this" should be undone if the following computation failed. In other words, code and data base transactions are not composable. This is a consequence of mixing things that have different semantics and it makes programming very difficult. Ok - lets do this in Erlang. 
Now mnesia is written in Erlang and by judiciously trapping any exceptions in our Erlang code we can make the code and the database updates have transaction semantics. foo(N) -> mnesia:transaction( fun() -> database(do this) 1/N database(do that) end). Will either succeed in which case "this" and "that" are done - or it will fail and the data base will have its original state. So code and database updates are composable. Then there is the problem of efficiency - changing languages (from C, to MySQL to Erlang) whatever means you have to muck around changing the internal representations of all your data types - how are integers represented in C, Erlang, MySQL - answer "you're not supposed to know" but try sending an Erlang bignum to C or storing it in MySQL and you'll soon learn the slow and painful way. If you keep within the same language framework you have non of these mismatches - and you have the added benefit of only having to learn one thing. For "webby" things I go for erlang+yaws+mnesia alternative like php+apache+mySQL have me shuddering with horror - not only do I have to learn three different things but the bits don't fit together properly. Now in the Erlang case the bits fit together properly - some people call this "conceptual integrity" - but believe me, fitting things together when they are all written in the same language is bad enough but fitting them together when they are written in different unsafe languages is a pain in the thing which I am sitting upon. > The second... the second, is it possible at all ? Can I > launch a process in another node and still let it handle a > local socket ? yes - if you're mad > And is it possible to have a pool of nodes and > launch new process in the one with less load ? Yes - virtually anything is possible - even pretty easy :-) > Somehow I feel like I am not seeing the whole picture (or > that I am missing some important Erlang feature). > Can anybody enlighten me ? (reading material is welcome). Enlightenment come by building a few systems - just keep at it - after about 30 years you'll either have seen the light - or become a management consultant. Incidentally, I think you should think very carefully about the protocols and not how they are terminated - you can terminate a protocol in any language - but you cannot correct a protocol design error with the smartest and fastest compiler in the world. Having a "redirect" message in your protocol which could occur at ANY point would make your architecture much better. If you have control over the client software then life gets even nicer - you can let the client try different hosts, until it finds one that it is happy with. After all why bother with DNS if the clients can probe multiple-machines - most modern P2P systems just need DNS to bootstrap themselves - thereafter they server their own namespaces. Cheers /Joe > -- > Pupeno (http://pupeno.com) > > PS: When I mention servers thing about typical Internet > servers: web, smtp, pop3, imap, dns, jabber, etc. > From ulf.wiger@REDACTED Tue Mar 28 10:30:02 2006 From: ulf.wiger@REDACTED (Ulf Wiger (AL/EAB)) Date: Tue, 28 Mar 2006 10:30:02 +0200 Subject: Distributed programming Message-ID: Joe Armstrong (AL/EAB) wrote: > > > The second... the second, is it possible at all ? Can I > > launch a process in another node and still let it handle a > > local socket ? > > yes - if you're mad Well, you'd need a proxy process handling the port. Unless things have changed in the past few years, a port cannot be controlled by a remote pid. 
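Roughly like this (untested sketch, names made up): the socket stays owned by a local proxy process, which only shovels data between the socket and a worker pid on another node. The proxy is assumed to be the controlling process of the socket, with the socket in active mode.

socket_proxy(Socket, RemoteWorker) ->
    receive
        {tcp, Socket, Data} ->
            RemoteWorker ! {packet, self(), Data},
            socket_proxy(Socket, RemoteWorker);
        {send, Bin} ->
            gen_tcp:send(Socket, Bin),
            socket_proxy(Socket, RemoteWorker);
        {tcp_closed, Socket} ->
            RemoteWorker ! {closed, self()}
    end.

All traffic then passes through the proxy node, which is essentially Joe's option 1) (tunnelling) with the throughput cost that comes with it.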
/Ulf W From matthias@REDACTED Tue Mar 28 10:44:23 2006 From: matthias@REDACTED (Matthias Lang) Date: Tue, 28 Mar 2006 10:44:23 +0200 Subject: obscure erlang-related publication In-Reply-To: References: Message-ID: <17448.63335.547727.788866@antilipe.corelatus.se> Ulf Wiger (AL/EAB) writes: > "A comparison of six languages for system level > description of telecom applications" > Jantsch, Kumar, Sander et al. > http://www.imit.kth.se/~axel/papers/2000/comparison.pdf > > I didn't see that under erlang.se/publications/ Wading through it all, you get to this on page 8: | These [the results] are the results of the judgment of one | or several persons for each language. In particular there | were 2 persons to evaluate Erlang, 3 for C++, 2 for Haskell, | 4 for VHDL, 2 for SDL and 2 for ProGram. I.e. they asked between 4 and 15 people what they thought about various aspects of one or more language and then put the results in impressive-looking tables, assigned fancy abbreviations and the odd greek letter before fudging around. They conclude that you can't conclude anything from the exercise. I agree completely. Matthias From sanjaya@REDACTED Tue Mar 28 10:32:54 2006 From: sanjaya@REDACTED (Sanjaya Vitharana) Date: Tue, 28 Mar 2006 14:32:54 +0600 Subject: Large DBs, mnesia_frag ???? Message-ID: <008901c65242$35c01de0$9a0a10ac@wavenet.lk> Hi ... !!! What will be the best way to handle 3 million records (size of the record = 1K) in mnesia with 4GB RAM. Please help anyone with such experience. Currently I'm testing with HP Server with 2GB RAM (there are plenty of harddisk space). I'm using beow to create the table, but getting problems when the table getting bigger (~350000 records). mnesia:create_table(profile_db,[ {disc_copies, NodeList}, {type, ordered_set}, {index, [type, last_update_date, first_creation_date, fax_no]}, {frag_properties, [{n_fragments, 30},{n_disc_copies, 1}]}, {attributes, record_info(fields, profile_db)} ]), Problems: (little bit details added to the end of the file, but may be not sufficient, if anyone needs more details I can send) 1.) unexpected restarts by heart. I have increase the heart beat timeout from 30 to 60 & 90. It will bring me from (~100000 receords to ~350000 records). But again it comes again this time 2.) some unexpected errors, which was not happend earlier (I mean upto the current size of the DB) 2.1) {aborted,{no_exists,profile_db_frag25}} 2.2) ** exited: {timeout,{gen_server,call,[vm_prof_db_svr,db_backup_once]}} ** 2.3) error_info: {{failed,{error,{file_error, "/usr2/omni_vm_prof/db/vmdb/db/backup/db_back_2006-3-28_14-3-4.BUPTMP", enoent}}}, [{disk_log,open,1}]} 2.4) {error,{"Cannot prepare checkpoint (replica not available)", [profile_db_frag10,{{1143,528317,121399},vmdb@REDACTED}]}} 2.5) eheap_alloc: Cannot allocate 122441860 bytes of memory (of type "heap"). Aborted I have idea to changing the below properties and try, but I don't no this will be the best way or not. disc_copies -> disc_only_copies ordered_set -> set (of course I could not find any direct function for this in mnesia reference manual, are there any way ?) So Please help anyone with experience of large Data Bases. Regards, Sanjaya Vitharana ----------------------------------------------------------------------------------------------------------------------------- 1.) DB Server Started DB will be backup in 59536 secs heart: Tue Mar 28 10:28:47 2006: Erlang has closed. heart: Tue Mar 28 10:28:52 2006: Unable to kill old process, kill failed (tried multiple times). 
heart: Tue Mar 28 10:28:52 2006: Executed "/etc/rc.d/init.d/vm_prof.system start". Terminating. ----------------------------------------------------------------------------------------------------------------------------- 2.1) {aborted,{no_exists,profile_db_frag25}} =ERROR REPORT==== 28-Mar-2006::10:28:39 === ** Generic server vm_prof_db_svr terminating ** Last message in was {count,"main"} ** When Server state == {state,"/usr2/omni_vm_prof/db/vmdb",1400000} ** Reason for termination == ** {aborted,{no_exists,profile_db_frag25}} DB Server Starting =CRASH REPORT==== 28-Mar-2006::10:28:39 === crasher: pid: <0.52.0> registered_name: vm_prof_db_svr error_info: {aborted,{no_exists,profile_db_frag25}} initial_call: {gen,init_it, [gen_server, <0.51.0>, <0.51.0>, {local,vm_prof_db_svr}, vm_prof_db_svr, [], []]} ancestors: [<0.51.0>,<0.50.0>] messages: [] links: [<0.51.0>,<0.63.0>] dictionary: [] trap_exit: false status: running heap_size: 377 stack_size: 21 reductions: 3667 neighbours:=ERROR REPORT==== 28-Mar-2006::10:28:39 === ** Generic server vm_prof_db_svr terminating ** Last message in was {count,"main"} ** When Server state == {state,"/usr2/omni_vm_prof/db/vmdb",1400000} ** Reason for termination == ** {aborted,{no_exists,profile_db_frag25}} DB Server Starting =CRASH REPORT==== 28-Mar-2006::10:28:39 === crasher: pid: <0.52.0> registered_name: vm_prof_db_svr error_info: {aborted,{no_exists,profile_db_frag25}} initial_call: {gen,init_it, [gen_server, <0.51.0>, <0.51.0>, {local,vm_prof_db_svr}, vm_prof_db_svr, [], []]} ancestors: [<0.51.0>,<0.50.0>] messages: [] links: [<0.51.0>,<0.63.0>] dictionary: [] trap_exit: false status: running heap_size: 377 stack_size: 21 reductions: 3667 neighbours: ----------------------------------------------------------------------------------------------------------------------------- 2.2) (vmdb@REDACTED)2> gen_server:call(vm_prof_db_svr,db_backup_once). ** exited: {timeout,{gen_server,call,[vm_prof_db_svr,db_backup_once]}} ** ----------------------------------------------------------------------------------------------------------------------------- 2.3) gen_server:call(vm_prof_db_svr,db_backup_once). DB will backup in to: "/usr2/omni_vm_prof/db/vmdb/db/backup/db_back_2006-3-28_14-3-4" ok (vmdb@REDACTED)2> =CRASH REPORT==== 28-Mar-2006::14:03:04 === crasher: pid: <0.266.0> registered_name: [] error_info: {{failed,{error,{file_error, "/usr2/omni_vm_prof/db/vmdb/db/backup/db_back_2006-3-28_14-3-4.BUPTMP", enoent}}}, [{disk_log,open,1}]} initial_call: {disk_log,init,[<0.69.0>,<0.70.0>]} ancestors: [disk_log_sup,kernel_safe_sup,kernel_sup,<0.9.0>] messages: [] links: [<0.69.0>] dictionary: [] trap_exit: true status: running heap_size: 610 stack_size: 21 reductions: 598 neighbours: =SUPERVISOR REPORT==== 28-Mar-2006::14:03:04 === Supervisor: {local,disk_log_sup} Context: child_terminated Reason: {{failed,{error,{file_error, "/usr2/omni_vm_prof/db/vmdb/db/backup/db_back_2006-3-28_14-3-4.BUPTMP", enoent}}}, [{disk_log,open,1}]} Offender: [{pid,<0.266.0>}, {name,disk_log}, {mfa,{disk_log,istart_link,[<0.70.0>]}}, {restart_type,temporary}, {shutdown,1000}, {child_type,worker}] DB backup failed: {file_error,"/usr2/omni_vm_prof/db/vmdb/db/backup/db_back_2006-3-28_14-3-4.BUPTMP", enoent} =ERROR REPORT==== 28-Mar-2006::14:03:04 === Mnesia(vmdb@REDACTED): ** ERROR ** Failed to abort backup. 
mnesia_backup:abort_write["/usr2/omni_vm_prof/db/vmdb/db/backup/db_back_2006-3-28_14-3-4"] -> {'EXIT', {badarg, [{mnesia_backup, abort_write, 1}, {mnesia_log, abort_write, 4}, {mnesia_log, do_backup_master, 1}, {mnesia_log, backup_master, 2}]}} --------------------------------------------------------------------------------------------- 2.4) [root@REDACTED root]# /usr2/omni_vm_prof/system/bin/to_erl /usr2/omni_vm_prof/pipe/ Attaching to /usr2/omni_vm_prof/pipe/erlang.pipe.2183066 (^D to exit) vm_prof_db:db_backup("/usr2/omni_vm_prof"). DB will backup in to: "/usr2/omni_vm_prof/db/backup/db_back_2006-3-28_12-45-17" DB backup failed: {"Cannot prepare checkpoint (replica not available)", [profile_db_frag10,{{1143,528317,121399},vmdb@REDACTED}]} {error,{"Cannot prepare checkpoint (replica not available)", [profile_db_frag10,{{1143,528317,121399},vmdb@REDACTED}]}} --------------------------------------------------------------------------------------------- 2.5) (vmdb@REDACTED)13> mnesia:table_info(profile_db,size). Crash dump was written to: erl_crash.dump eheap_alloc: Cannot allocate 122441860 bytes of memory (of type "heap"). Aborted You have new mail in /var/spool/mail/root [root@REDACTED vmdb]# --------------------------------------------------------------------------------------------- -------------- next part -------------- An HTML attachment was scrubbed... URL: From ulf.wiger@REDACTED Tue Mar 28 10:57:50 2006 From: ulf.wiger@REDACTED (Ulf Wiger (AL/EAB)) Date: Tue, 28 Mar 2006 10:57:50 +0200 Subject: obscure erlang-related publication Message-ID: Matthias Lang wrote: > > Wading through it all, you get to this on page 8: > > | These [the results] are the results of the judgment of one > | or several persons for each language. In particular there > | were 2 persons to evaluate Erlang, 3 for C++, 2 for Haskell, > | 4 for VHDL, 2 for SDL and 2 for ProGram. > > I.e. they asked between 4 and 15 people what they thought > about various aspects of one or more language and then put > the results in impressive-looking tables, assigned fancy > abbreviations and the odd greek letter before fudging around. > > They conclude that you can't conclude anything from the exercise. Actually, that's not what they write: "We have not eliminated subjectivity and we cannot suggest a final conclusion but we have analysed different strengths and weaknesses of the languages and we have established causal relations between assumptions and evaluation results due to a systematic evaluation method. [...] However, we have shown a way to make an evaluation transparent and subject to detailed analysis and discussion by making all the assumptions and priorities as explicit as possible." That is, they propose a method of comparing languages that is at least somewhat more structured and transparent than the usual hand-waving approach. For one thing, this method allows you to wade a bit deeper, look into their tables and evaluation criteria and highlight the parts where you disagree, or you think that they could manage to be less subjective. (: BR, Ulf W From bmk@REDACTED Tue Mar 28 11:10:48 2006 From: bmk@REDACTED (Micael Karlberg) Date: Tue, 28 Mar 2006 11:10:48 +0200 Subject: os_mon & alarm_handler in R10B-10 In-Reply-To: <44282FAC.3090804@hq.idt.net> References: <44246D83.2000402@hq.idt.net> <44249B8E.2030105@hq.idt.net> <44278F7A.5070702@erix.ericsson.se> <44282FAC.3090804@hq.idt.net> Message-ID: <4428FD98.8060201@erix.ericsson.se> Hi, The new (and delete) function is an optional one. 
Also there is no defined return value for this function. Therefor it's not worth the effort to try to figure if the result is ok or not. /BMK Serge Aleynikov wrote: > Gunilla, > > I believe there might be another bug in SNMP revealed by my experiments > with OS_MON & OTP_MIBS. If mnesia is started *after* the snmp agent, > and the snmp agent has the mibs parameter set, an attempt to initialize > mib OIDs using instrumentation functions with the 'new' operation (such > as otp_mib:erl_node_table(new)), leads to an ignored exception that > ideally should prevent the SNMP agent from starting. > > Release file: > ============= > {release, {"dripdb", "1.0"}, {erts, "5.4.13"}, > [ > {kernel , "2.10.13"}, > {stdlib , "1.13.12"}, > {sasl , "2.1.1"}, > {lama , "1.0"}, > {otp_mibs, "1.0.4"}, > {os_mon , "2.0"}, > {snmp , "4.7.1"}, > {mnesia , "4.2.5"} > ] > }. > > Config file: > ============ > > %%------------ SNMP agent configuration ---------------------- > {snmp, > [{agent, > [{config, [{dir, "etc/snmp/"}, > {force_load, true} > ]}, > {db_dir, "var/snmp_db/"}, > {mibs, ["mibs/priv/OTP-MIB", > "mibs/priv/OTP-OS-MON-MIB"]} > ] > } > ] > } > > This is a trace of the error which hides the fact that there was a > problem with creation of the 'erlNodeAlloc' table: > > (<0.126.0>) call > snmpa_mib_data:call_instrumentation({me,[1,3,6,1,4,1,193,19,3,1,2,1,1,1], > table_entry, > erlNodeEntry, > undefined, > 'not-accessible', > {otp_mib,erl_node_table,[]}, > false, > [{table_entry_with_sequence,'ErlNodeEntry'}], > undefined, > undefined},new) > (<0.126.0>) returned from snmpa_mib_data:call_instrumentation/2 -> > {'EXIT',{aborted,{node_not_running,drpdb@REDACTED}}} > > Therefore all the SNMP manager's calls to OIDs inside 'erlNodeTable' or > 'applTable' tables fail. > > I can provide additional details if needed, if the information here is > not sufficient. I believe the proper action to do would be not to > absorb the error in the call_instrumentation function when the Operation > is 'new'. I am providing the snippet of code where that exception is > currently ignored: > > snmpa_mib_data.erl(line 1319): > ============================== > call_instrumentation(#me{entrytype = variable, mfa={M,F,A}}, Operation) -> > ?vtrace("call instrumentation with" > "~n entrytype: variable" > "~n MFA: {~p,~p,~p}" > "~n Operation: ~p", > [M,F,A,Operation]), > catch apply(M, F, [Operation | A]); > ... > > > Regards, > > Serge > > > Gunilla Arendt wrote: > >> It's a bug in os_mon, it shouldn't use get_alarms(). >> Thanks for the heads up. >> >> Regards, Gunilla >> >> >> Serge Aleynikov wrote: >> >>> For now I used the following patch to take care of this issue, but I >>> would be curious to hear the opinion of the OTP staff. >>> >>> Regards, >>> >>> Serge >>> >>> --- alarm_handler.erl.orig Fri Mar 24 20:08:18 2006 >>> +++ alarm_handler.erl Fri Mar 24 20:19:15 2006 >>> @@ -58,7 +58,12 @@ >>> %% Returns: [{AlarmId, AlarmDesc}] >>> %%----------------------------------------------------------------- >>> get_alarms() -> >>> - gen_event:call(alarm_handler, alarm_handler, get_alarms). >>> + case gen_event:which_handlers(alarm_handler) of >>> + [M | _] -> >>> + gen_event:call(alarm_handler, M, get_alarms); >>> + [] -> >>> + [] >>> + end. >>> >>> add_alarm_handler(Module) when atom(Module) -> >>> gen_event:add_handler(alarm_handler, Module, []). >>> >>> >>> Serge Aleynikov wrote: >>> >>>> Hi, >>>> >>>> I've been experimenting with the reworked os_mon in R10B-10, and >>>> encountered the following issue. 
>>>> >>>> The documentation encourages to replace the default alarm handler >>>> with something more sophisticated. For that reason I created a >>>> custom handler - lama_alarm_h (LAMA app in jungerl), which uses >>>> gen_event:swap_sup_handler/3. >>>> >>>> I initiate that handler prior to starting OS_MON, and then start >>>> OS_MON. >>>> >>>> In the latest release R10B-10, OS_MON calls >>>> alarm_handler:get_alarms/0 upon startup. >>>> >>>> This causes the 'alarm_handler' event manager issue a call in the >>>> alarm_handler.erl module. However, since that handler was replaced >>>> by a custom alarm handler, the gen_event's call fails with >>>> {error, bad_module}. >>>> >>>> gen_event always dispatches a call/3 to a specific handler module >>>> passed as a parameter, e.g.: >>>> >>>> -----[alarm_handler.erl (line: 60)]----- >>>> get_alarms() -> >>>> gen_event:call(alarm_handler, alarm_handler, get_alarms). >>>> ---------------------------------------- >>>> >>>> Yet, if the alarm_handler handler was swapped by another module, the >>>> gen_event:call will report an error, therefore crashing OS_MON. >>>> >>>> One way to resolve this problem would be to introduce another >>>> exported function in gen_event: >>>> >>>> gen_event:call(EventMgrRef, Request) -> Result >>>> >>>> Can the OTP team suggest some other workaround? >>>> >>>> Serge >>>> >>> >> >> > From ulf.wiger@REDACTED Tue Mar 28 11:20:56 2006 From: ulf.wiger@REDACTED (Ulf Wiger (AL/EAB)) Date: Tue, 28 Mar 2006 11:20:56 +0200 Subject: obscure erlang-related publication Message-ID: Anyway, my purpose of calling attention to this paper is that it wasn't listed under 'publications' at erlang.se. My impression from reading through it was that it is of acceptable quality (if not revolutionary), and that its conclusions are well in line with other papers, e.g. comparing Erlang and SDL. I've frequently (at least a few times every year) had reason to explain to people that one reason why programming in Erlang is productive is that it is at roughly the same level of abstraction as mainstream modeling languages (such as SDL and UML). One more reference to a study that reaches the same conclusion probably won't hurt. BR, Ulf W > -----Original Message----- > From: Matthias Lang [mailto:matthias@REDACTED] > Sent: den 28 mars 2006 10:44 > To: Ulf Wiger (AL/EAB) > Cc: erlang-questions@REDACTED > Subject: Re: obscure erlang-related publication > > Ulf Wiger (AL/EAB) writes: > > > "A comparison of six languages for system level > > description of telecom applications" > > Jantsch, Kumar, Sander et al. > > http://www.imit.kth.se/~axel/papers/2000/comparison.pdf > > > > I didn't see that under erlang.se/publications/ > > Wading through it all, you get to this on page 8: > > | These [the results] are the results of the judgment of one > | or several persons for each language. In particular there > | were 2 persons to evaluate Erlang, 3 for C++, 2 for Haskell, > | 4 for VHDL, 2 for SDL and 2 for ProGram. > > I.e. they asked between 4 and 15 people what they thought > about various aspects of one or more language and then put > the results in impressive-looking tables, assigned fancy > abbreviations and the odd greek letter before fudging around. > > They conclude that you can't conclude anything from the exercise. > > I agree completely. 
> > Matthias > From rlenglet@REDACTED Tue Mar 28 11:33:46 2006 From: rlenglet@REDACTED (Romain Lenglet) Date: Tue, 28 Mar 2006 18:33:46 +0900 Subject: Port driver communication witout copy of binaries Message-ID: <200603281833.46399.rlenglet@users.forge.objectweb.org> Hi, I have the following need: I want to wrap C functions in Erlang. Those functions get big binaries as input parameters, and return big binaries, among other kinds of data. For efficiency, I would like to avoid to copy those binaries around when communicating. Therefore, I am forced to implement a C port driver, since this is the only available mechanism that does not create a separate system process (and hence does not require inter-process data copy when communicating). If I needed only to send one binary in every message, that would be OK, e.g.: % in Erlang: Binary = <<...>>, port_command(Port, Binary), // in the C port implem: void myoutput(ErlDrvData drv_data, char *buf, int len) { ... } I guess that the Binary is not copied, and its data in the Erlang heap is directly pointed by the *buf argument. By the way, is that true??? Sending binaries that way is what is done in prim_inet for sending IP data, so I guess that no copy is done here. However, I want to send and receive more complex data, which must be manipulated by the driver, typically a tuple of simple terms and binaries (which may be large), e.g. the tuple: Tuple = {ContextHandle, QopReq, Message} %% ContextHandle = small binary() %% QopReq = integer() | atom() %% Message = large binary() Such a tuple cannot be passed to port_command(Port, Tuple), since it is not an IO list. And if I encode it into a binary, by calling encode(Tuple), I guess that the binaries in the tuple will get copied in the process (can anybody confirm this?). I have the same problem in the Driver -> Erlang direction, e.g. to send the tuple: Tuple = {MajorStatus, MinorStatus, ConfState, QopState, OutputMessage} %% MajorStatus = integer() %% MinorStatus = integer() %% ConfState = bool() %% QopState = integer() %% OutputMessage = large binary() I hope that using the driver_output_term() C function and the ErlDrvTermData construction technique, the binary data in the tuple above will not be copied. Can anybody confirm this? Is there any clean solution to my problem? Or am I doomed to write my own BIFs and use my custom erts? Or to send data in multiple messages, in sequence? I dream of a way to extend the BIFs list at runtime, by loading native libraries dynamically... -- Romain LENGLET From raimo@REDACTED Tue Mar 28 13:42:01 2006 From: raimo@REDACTED (Raimo Niskanen) Date: 28 Mar 2006 13:42:01 +0200 Subject: Port driver communication witout copy of binaries References: <200603281833.46399.rlenglet@users.forge.objectweb.org> Message-ID: What you really want to do can not be done (as far as I know) but you might get it done with some tricks... To avoid copying your driver must implement the ->outputv() entry point and you must send it I/O lists being lists of binaries (might even be an improper list, that is a binary in the tail). You will have to map your tuples into that. 
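For the tuple in the question, that mapping might look roughly like this on the Erlang side (the module name, command byte and length-prefix framing are invented for the sketch; the driver has to agree on whatever convention is picked). The driver-side picture of such an I/O list is sketched next.

-module(wrap_sketch).
-export([wrap/4]).

-define(OP_WRAP, 1).   %% made-up command byte

%% Flatten {ContextHandle, QopReq, Message} into an I/O list so that the
%% large Message binary reaches the driver's outputv() callback by
%% reference instead of being copied.
wrap(Port, ContextHandle, QopReq, Message)
  when is_binary(ContextHandle), is_binary(Message) ->
    Qop = encode_qop(QopReq),
    CtxSize = size(ContextHandle),
    Data = [?OP_WRAP,
            <<CtxSize:16>>, ContextHandle,
            <<Qop:32>>
            | Message],     %% binary in the tail is allowed, see below
    true = erlang:port_command(Port, Data),
    ok.

encode_qop(default)              -> 0;
encode_qop(N) when is_integer(N) -> N.

Only the small header bytes travel as loose bytes; the large binary stays a reference-counted binary all the way to the driver.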
If you send [1,<<2,3,4>>,5,6|<<7,8>>] to the driver, void (*outputv)(ErlDrvData drv_data, ErlIOVec *ev) will get: ev->iov[0].iov_len = 1; ev->iov[0].iov_base -> {1}; ev->binv[0] = NULL; ev->iov[1].iov_len = 3; ev->iov[1].iov_base -> ev->binv[1]->orig_bytes; ev->binv[1]->orig_size = 3; ev->binv[1]->orig_bytes = {2,3,4}; ev->iov[2].iov_len = 2; ev->iov[2].iov_base -> {5,6}; ev->binv[2] = NULL; ev->iov[3].iov_len = 2; ev->iov[3].iov_base -> ev->binv[3]->orig_bytes; ev->binv[3]->orig_size = 2; ev->binv[3]->orig_bytes = {7,8}; approximately, excuse my syntax :-) Binaries will be binaries and intermediate bytes will be loose vectors. If your driver wants to hang on to the data, it will have to use the reference count in the binary to avoid premature freeing. To send data back without copying your driver will have to use driver_outputv() and it arrives to erlang as a header list of integers followed by a list of binaries. Conversion to tuple format will have to be done in erlang. Keep on dreaming... Have a look at efile_drv.c in the sources... rlenglet@REDACTED (Romain Lenglet) writes: > Hi, > > > I have the following need: I want to wrap C functions in Erlang. > Those functions get big binaries as input parameters, and return > big binaries, among other kinds of data. > For efficiency, I would like to avoid to copy those binaries > around when communicating. Therefore, I am forced to implement a > C port driver, since this is the only available mechanism that > does not create a separate system process (and hence does not > require inter-process data copy when communicating). > > If I needed only to send one binary in every message, that would > be OK, e.g.: > > % in Erlang: > Binary = <<...>>, > port_command(Port, Binary), > > // in the C port implem: > void myoutput(ErlDrvData drv_data, char *buf, int len) { > ... > } > > I guess that the Binary is not copied, and its data in the Erlang > heap is directly pointed by the *buf argument. > By the way, is that true??? Sending binaries that way is what is > done in prim_inet for sending IP data, so I guess that no copy > is done here. > > > However, I want to send and receive more complex data, which must > be manipulated by the driver, typically a tuple of simple terms > and binaries (which may be large), e.g. the tuple: > Tuple = {ContextHandle, QopReq, Message} > %% ContextHandle = small binary() > %% QopReq = integer() | atom() > %% Message = large binary() > > Such a tuple cannot be passed to port_command(Port, Tuple), since > it is not an IO list. And if I encode it into a binary, by > calling encode(Tuple), I guess that the binaries in the tuple > will get copied in the process (can anybody confirm this?). > > > I have the same problem in the Driver -> Erlang direction, e.g. > to send the tuple: > Tuple = {MajorStatus, MinorStatus, ConfState, QopState, > OutputMessage} > %% MajorStatus = integer() > %% MinorStatus = integer() > %% ConfState = bool() > %% QopState = integer() > %% OutputMessage = large binary() > I hope that using the driver_output_term() C function and the > ErlDrvTermData construction technique, the binary data in the > tuple above will not be copied. Can anybody confirm this? > > > > Is there any clean solution to my problem? Or am I doomed to > write my own BIFs and use my custom erts? Or to send data in > multiple messages, in sequence? > > I dream of a way to extend the BIFs list at runtime, by loading > native libraries dynamically... 
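On the reply direction described above: if the driver answers with driver_outputv(), the data shows up at the connected process as header integers followed by the untouched binary, and re-packing it into a tuple is plain Erlang. A rough sketch, assuming the port is opened in binary mode, the driver sends a single reply binary, and (purely for brevity) each status field fits in one header byte:

wait_reply(Port) ->
    receive
        {Port, {data, Data}} ->
            decode_reply(Data)
    end.

%% [Major, Minor, Conf, Qop] are header bytes; OutputMessage is the reply
%% binary, which never has to be rebuilt or copied on this side.
decode_reply([Major, Minor, Conf, Qop, OutputMessage])
  when is_binary(OutputMessage) ->
    {Major, Minor, Conf =/= 0, Qop, OutputMessage}.

Real status codes would of course need more than one byte each; the layout is only illustrative.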
> > -- > Romain LENGLET -- / Raimo Niskanen, Erlang/OTP, Ericsson AB From mark@REDACTED Tue Mar 28 15:24:52 2006 From: mark@REDACTED (Mark Lee) Date: Tue, 28 Mar 2006 14:24:52 +0100 Subject: Trouble with gen_server Message-ID: <20060328132452.GA31790@unhinged.eclipse.co.uk> My understanding of the whole OTP architecture is pretty lacking I'm afraid. I'm having trouble with a gen_server. I'm making a call (as opposed to a cast) and providing a timeout. The timeout's being reached and the whole gen_server then falls over. What am I missing? Related to this I'm sure is the case where I might be running a tree of servers and one mis-spelled call to the erlang shell will bring the whole lot crashing down. Can someone enlighten me please, thanks, Mark From mark@REDACTED Tue Mar 28 15:50:11 2006 From: mark@REDACTED (Mark Lee) Date: Tue, 28 Mar 2006 14:50:11 +0100 Subject: SV: Trouble with gen_server In-Reply-To: References: <20060328132452.GA31790@unhinged.eclipse.co.uk> Message-ID: <20060328135011.GA22019@geneity.co.uk> O nTue, Mar 28, 2006 at 03:31:38PM +0200, Lennart ?hman wrote: > Hi, if you include a code example it becomes easier to help :-) > > /Lennart Ok, as simple as I can get. 1> test:start_link(). {ok,<0.33.0>} 2> test:test(). ** exited: {timeout,{gen_server,call,[test,test]}} ** 3> test:test(). ** exited: {noproc,{gen_server,call,[test,test]}} ** or 1> test:start_link(). {ok,<0.33.0>} 2> oops(). =ERROR REPORT==== 28-Mar-2006::14:45:20 === Error in process <0.31.0> with exit value: {undef,[{shell_default,oops,[]},{erl_eval,do_apply,5},{shell,exprs,6},{shell,eval_loop,3}]} ** exited: {undef,[{shell_default,oops,[]}, {erl_eval,do_apply,5}, {shell,exprs,6}, {shell,eval_loop,3}]} ** 3> test:test(). ** exited: {noproc,{gen_server,call,[test,test]}} ** Here's the code: -export([start_link/0]). -export([init/1, handle_call/3, handle_cast/2, handle_info/2, terminate/2, code_change/3, test/0]). -record(state, {}). -define(SERVER, ?MODULE). start_link() -> gen_server:start_link({local, ?SERVER}, ?MODULE, [], []). init([]) -> {ok, #state{}}. handle_call(test, _From, State) -> {noreply, State, 3000}; handle_call(_Request, _From, State) -> Reply = ok, {reply, Reply, State}. handle_cast(_Msg, State) -> {noreply, State}. handle_info(_Info, State) -> {noreply, State}. terminate(_Reason, _State) -> ok. code_change(_OldVsn, State, _Extra) -> {ok, State}. test() -> gen_server:call(test, test). From Lennart.Ohman@REDACTED Tue Mar 28 16:09:57 2006 From: Lennart.Ohman@REDACTED (=?iso-8859-1?Q?Lennart_=D6hman?=) Date: Tue, 28 Mar 2006 16:09:57 +0200 Subject: SV: SV: Trouble with gen_server References: <20060328132452.GA31790@unhinged.eclipse.co.uk> <20060328135011.GA22019@geneity.co.uk> Message-ID: Ok, now I see. The reason is actually not related to your (miss)understanding of OTP, but rather to how the Erlang shell works. What happens is that by using gen_server:start_link you create a process link between the "parent" process and the newly created server process. This is normally what you want if the server is started by a supervisor. But now you start the server manually from the shell, which is fine so far... But then you experience a runtime error in the shell. Because there is a standard timeout in the gen_server:call/3 function not waiting for more than 5 seconds (if I remember correctly). Since gen_server:call/3 is executed in your shell process, the shell process (or more precisely a help process to it) terminates. 
Links are bidirectional, making your server experience it as its supervisor has terminated. Hence it terminates! Try adding a start API for manual start using gen_server:start instead = no link. Best Regards /Lennart ------------------------------------------------------------- Lennart Ohman phone : +46-8-587 623 27 Sj?land & Thyselius Telecom AB cellular: +46-70-552 6735 Sehlstedtsgatan 6 fax : +46-8-667 8230 SE-115 28 STOCKHOLM, SWEDEN email : lennart.ohman@REDACTED ________________________________ Fr?n: owner-erlang-questions@REDACTED genom Mark Lee Skickat: ti 2006-03-28 15:50 Till: erlang-questions@REDACTED ?mne: Re: SV: Trouble with gen_server O nTue, Mar 28, 2006 at 03:31:38PM +0200, Lennart ?hman wrote: > Hi, if you include a code example it becomes easier to help :-) > > /Lennart Ok, as simple as I can get. 1> test:start_link(). {ok,<0.33.0>} 2> test:test(). ** exited: {timeout,{gen_server,call,[test,test]}} ** 3> test:test(). ** exited: {noproc,{gen_server,call,[test,test]}} ** or 1> test:start_link(). {ok,<0.33.0>} 2> oops(). =ERROR REPORT==== 28-Mar-2006::14:45:20 === Error in process <0.31.0> with exit value: {undef,[{shell_default,oops,[]},{erl_eval,do_apply,5},{shell,exprs,6},{shell,eval_loop,3}]} ** exited: {undef,[{shell_default,oops,[]}, {erl_eval,do_apply,5}, {shell,exprs,6}, {shell,eval_loop,3}]} ** 3> test:test(). ** exited: {noproc,{gen_server,call,[test,test]}} ** Here's the code: -export([start_link/0]). -export([init/1, handle_call/3, handle_cast/2, handle_info/2, terminate/2, code_change/3, test/0]). -record(state, {}). -define(SERVER, ?MODULE). start_link() -> gen_server:start_link({local, ?SERVER}, ?MODULE, [], []). init([]) -> {ok, #state{}}. handle_call(test, _From, State) -> {noreply, State, 3000}; handle_call(_Request, _From, State) -> Reply = ok, {reply, Reply, State}. handle_cast(_Msg, State) -> {noreply, State}. handle_info(_Info, State) -> {noreply, State}. terminate(_Reason, _State) -> ok. code_change(_OldVsn, State, _Extra) -> {ok, State}. test() -> gen_server:call(test, test). -------------- next part -------------- An HTML attachment was scrubbed... URL: From serge@REDACTED Tue Mar 28 16:22:06 2006 From: serge@REDACTED (Serge Aleynikov) Date: Tue, 28 Mar 2006 09:22:06 -0500 Subject: os_mon & alarm_handler in R10B-10 In-Reply-To: <4428FD98.8060201@erix.ericsson.se> References: <44246D83.2000402@hq.idt.net> <44249B8E.2030105@hq.idt.net> <44278F7A.5070702@erix.ericsson.se> <44282FAC.3090804@hq.idt.net> <4428FD98.8060201@erix.ericsson.se> Message-ID: <4429468E.7070406@hq.idt.net> Micael, It is not directly apparent that these functions are optional for SNMP tables (especially for the OS_MON and OTP_MIBS applications, which happended to expect the data to be stored in mnesia tables). If by 'optional' you mean that it's up to the programmer to decide whether to implement them or not, it's one thing, but if the programmer decides to implement them, then, IMHO, errors in these functions should not be ignored. The docs doesn't say anything about ignoring exceptions raised in the new/delete functions: http://www.erlang.org/doc/doc-5.4.13/lib/snmp-4.7.1/doc/html/snmp_instr_functions.html#9 9.1.1 New / Delete Operations ... For tables: table_access(new [, ExtraArg1, ...]) table_access(delete [, ExtraArg1, ...]) These functions are called for each object in an MIB when the MIB is unloaded or loaded, respectively. 
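To make the discussion concrete, a table instrumentation function along these lines (the table name and helper are hypothetical) can at least fail loudly when mnesia is not up, instead of quietly producing an empty table:

%% Sketch only: 'new' refuses to proceed unless mnesia is running.
my_node_table(new) ->
    case mnesia:system_info(is_running) of
        yes -> ok = create_my_node_table();   %% hypothetical helper
        No  -> exit({mnesia_not_running, No})
    end;
my_node_table(delete) ->
    ok.

Whether that exit then actually stops the agent from loading the MIB is precisely the snmpa_mib_data behaviour questioned here.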
Moreover depending on the startup order of the snmp & mnesia apps listed in the release file, the functionality of os_mon and otp_mibs will be different as was illustrated in my former email. I suggest that this either needs to be documented, or better fixed by escalating the raised exception. So, if calling os_mon_mib:load(snmp_master_agent) or otp_mib:load(snmp_master_agent) with snmp applicaion started but without mnesia running, the functions don't fail, and if mnesia is started right after these calls, this makes it pretty difficult to figure out why SNMP manager is reporting errors in SNMP queries, as all applications seem to be running as expected on the SNMP agent node. Regards, Serge Micael Karlberg wrote: > Hi, > > The new (and delete) function is an optional one. Also > there is no defined return value for this function. > Therefor it's not worth the effort to try to figure if > the result is ok or not. > > /BMK > > Serge Aleynikov wrote: > >> Gunilla, >> >> I believe there might be another bug in SNMP revealed by my >> experiments with OS_MON & OTP_MIBS. If mnesia is started *after* the >> snmp agent, and the snmp agent has the mibs parameter set, an attempt >> to initialize mib OIDs using instrumentation functions with the 'new' >> operation (such as otp_mib:erl_node_table(new)), leads to an ignored >> exception that ideally should prevent the SNMP agent from starting. >> >> Release file: >> ============= >> {release, {"dripdb", "1.0"}, {erts, "5.4.13"}, >> [ >> {kernel , "2.10.13"}, >> {stdlib , "1.13.12"}, >> {sasl , "2.1.1"}, >> {lama , "1.0"}, >> {otp_mibs, "1.0.4"}, >> {os_mon , "2.0"}, >> {snmp , "4.7.1"}, >> {mnesia , "4.2.5"} >> ] >> }. >> >> Config file: >> ============ >> >> %%------------ SNMP agent configuration ---------------------- >> {snmp, >> [{agent, >> [{config, [{dir, "etc/snmp/"}, >> {force_load, true} >> ]}, >> {db_dir, "var/snmp_db/"}, >> {mibs, ["mibs/priv/OTP-MIB", >> "mibs/priv/OTP-OS-MON-MIB"]} >> ] >> } >> ] >> } >> >> This is a trace of the error which hides the fact that there was a >> problem with creation of the 'erlNodeAlloc' table: >> >> (<0.126.0>) call >> snmpa_mib_data:call_instrumentation({me,[1,3,6,1,4,1,193,19,3,1,2,1,1,1], >> table_entry, >> erlNodeEntry, >> undefined, >> 'not-accessible', >> {otp_mib,erl_node_table,[]}, >> false, >> [{table_entry_with_sequence,'ErlNodeEntry'}], >> undefined, >> undefined},new) >> (<0.126.0>) returned from snmpa_mib_data:call_instrumentation/2 -> >> {'EXIT',{aborted,{node_not_running,drpdb@REDACTED}}} >> >> Therefore all the SNMP manager's calls to OIDs inside 'erlNodeTable' >> or 'applTable' tables fail. >> >> I can provide additional details if needed, if the information here is >> not sufficient. I believe the proper action to do would be not to >> absorb the error in the call_instrumentation function when the >> Operation is 'new'. I am providing the snippet of code where that >> exception is currently ignored: >> >> snmpa_mib_data.erl(line 1319): >> ============================== >> call_instrumentation(#me{entrytype = variable, mfa={M,F,A}}, >> Operation) -> >> ?vtrace("call instrumentation with" >> "~n entrytype: variable" >> "~n MFA: {~p,~p,~p}" >> "~n Operation: ~p", >> [M,F,A,Operation]), >> catch apply(M, F, [Operation | A]); >> ... >> >> >> Regards, >> >> Serge >> >> >> Gunilla Arendt wrote: >> >>> It's a bug in os_mon, it shouldn't use get_alarms(). >>> Thanks for the heads up. 
>>> >>> Regards, Gunilla >>> >>> >>> Serge Aleynikov wrote: >>> >>>> For now I used the following patch to take care of this issue, but I >>>> would be curious to hear the opinion of the OTP staff. >>>> >>>> Regards, >>>> >>>> Serge >>>> >>>> --- alarm_handler.erl.orig Fri Mar 24 20:08:18 2006 >>>> +++ alarm_handler.erl Fri Mar 24 20:19:15 2006 >>>> @@ -58,7 +58,12 @@ >>>> %% Returns: [{AlarmId, AlarmDesc}] >>>> %%----------------------------------------------------------------- >>>> get_alarms() -> >>>> - gen_event:call(alarm_handler, alarm_handler, get_alarms). >>>> + case gen_event:which_handlers(alarm_handler) of >>>> + [M | _] -> >>>> + gen_event:call(alarm_handler, M, get_alarms); >>>> + [] -> >>>> + [] >>>> + end. >>>> >>>> add_alarm_handler(Module) when atom(Module) -> >>>> gen_event:add_handler(alarm_handler, Module, []). >>>> >>>> >>>> Serge Aleynikov wrote: >>>> >>>>> Hi, >>>>> >>>>> I've been experimenting with the reworked os_mon in R10B-10, and >>>>> encountered the following issue. >>>>> >>>>> The documentation encourages to replace the default alarm handler >>>>> with something more sophisticated. For that reason I created a >>>>> custom handler - lama_alarm_h (LAMA app in jungerl), which uses >>>>> gen_event:swap_sup_handler/3. >>>>> >>>>> I initiate that handler prior to starting OS_MON, and then start >>>>> OS_MON. >>>>> >>>>> In the latest release R10B-10, OS_MON calls >>>>> alarm_handler:get_alarms/0 upon startup. >>>>> >>>>> This causes the 'alarm_handler' event manager issue a call in the >>>>> alarm_handler.erl module. However, since that handler was replaced >>>>> by a custom alarm handler, the gen_event's call fails with >>>>> {error, bad_module}. >>>>> >>>>> gen_event always dispatches a call/3 to a specific handler module >>>>> passed as a parameter, e.g.: >>>>> >>>>> -----[alarm_handler.erl (line: 60)]----- >>>>> get_alarms() -> >>>>> gen_event:call(alarm_handler, alarm_handler, get_alarms). >>>>> ---------------------------------------- >>>>> >>>>> Yet, if the alarm_handler handler was swapped by another module, >>>>> the gen_event:call will report an error, therefore crashing OS_MON. >>>>> >>>>> One way to resolve this problem would be to introduce another >>>>> exported function in gen_event: >>>>> >>>>> gen_event:call(EventMgrRef, Request) -> Result >>>>> >>>>> Can the OTP team suggest some other workaround? >>>>> >>>>> Serge From matthias@REDACTED Tue Mar 28 16:20:35 2006 From: matthias@REDACTED (Matthias Lang) Date: Tue, 28 Mar 2006 16:20:35 +0200 Subject: SV: Trouble with gen_server In-Reply-To: <20060328135011.GA22019@geneity.co.uk> References: <20060328132452.GA31790@unhinged.eclipse.co.uk> <20060328135011.GA22019@geneity.co.uk> Message-ID: <17449.17971.194833.242415@antilipe.corelatus.se> Mark Lee writes: > Ok, as simple as I can get. > > 1> test:start_link(). > {ok,<0.33.0>} > 2> test:test(). > ** exited: {timeout,{gen_server,call,[test,test]}} ** > 3> test:test(). > ** exited: {noproc,{gen_server,call,[test,test]}} ** Your first line starts the server AND LINKS IT TO THE SHELL. So, when your call fails, your shell process dies. And thus your gen server must die too. If you don't want that to happen, don't link the gen_server to the shell. Matthias From thomas@REDACTED Tue Mar 28 16:22:47 2006 From: thomas@REDACTED (Thomas Johnsson) Date: Tue, 28 Mar 2006 16:22:47 +0200 Subject: Can pattern variables be globally bound? 
In-Reply-To: <78568af10603250615t1d925999u9a7a3fbaf5c54214@mail.gmail.com> References: <78568af10603250615t1d925999u9a7a3fbaf5c54214@mail.gmail.com> Message-ID: <442946B7.9020303@skri.net> This is a messy corner in Erlang... Although same variables in patterns in function heads and case expressions means equality, 'equality-ness' does not propagate into fun()'s and list comprehensions, instead they denote new variables with the same name: f(X,Y) -> %3 F = fun(X) -> {X,X} end, %4 F(Y). %5 f2(X,L) -> %6 [ X || X <- L ]. %7 ... 10> c(pat). ./pat.erl:3: Warning: variable 'X' is unused ./pat.erl:4: Warning: variable 'X' shadowed in 'fun' ./pat.erl:6: Warning: variable 'X' is unused ./pat.erl:7: Warning: variable 'X' shadowed in generate {ok,pat} 11> pat:f(3,5). {5,5} 12> pat:f2(3,[1,2,3,4]). [1,2,3,4] 13> -- Thomas Ryan Rawson wrote: >You are correct. Essentially when X != 0, you are trying to >essentially say "b IFF X*10 == X". Which is generally not true :-) >So you get a case clause exception. > >The subject line is a little misleading, since variables can only >exist in the context of a single function "scope" - with lexical >scoping rules of course for fun()s. > >"Global" variables can be accomplished with the process dictionary or >ets or mnesia tables. There are other techniques, like using a >gen_server to maintain state across requests (using recursion/tail >recusion). > >-ryan > >On 3/25/06, Roger Price wrote: > > >>The following program: >> >>-module(test) . % 1 >>-export([test/1]) . % 2 >>test (X) -> % 3 >> case X*10 % 4 >> of 0 -> a % 5 >> ; X -> b % 6 >> end . % 7 >> >>compiles with no warnings, and provides the following output: >> >>Eshell V5.4.9 (abort with ^G) >>1> test:test(0) . >>a >>2> test:test(1) . >> >>=ERROR REPORT==== 25-Mar-2006::14:20:45 === >>Error in process <0.31.0> with exit value: >>{{case_clause,10},[{test,test,1},{shell,exprs,6},{shell,eval_loop,3}]} >> >>My understanding of the error message is that the pattern variable X on >>line 6 is already bound to the value 1, and therefore no match is possible >>for value 10. Is this correct? >> >>Roger >> >> >> > > > From matt@REDACTED Tue Mar 28 16:27:13 2006 From: matt@REDACTED (Matthew McDonnell) Date: Tue, 28 Mar 2006 15:27:13 +0100 (BST) Subject: Trouble with gen_server In-Reply-To: <20060328132452.GA31790@unhinged.eclipse.co.uk> References: <20060328132452.GA31790@unhinged.eclipse.co.uk> Message-ID: On Tue, 28 Mar 2006, Mark Lee wrote: > My understanding of the whole OTP architecture is pretty lacking I'm > afraid. I'm having trouble with a gen_server. I'm making a call (as > opposed to a cast) and providing a timeout. The timeout's being > reached and the whole gen_server then falls over. What am I missing? Hi Mark, I'm in a similar boat myself trying to learn OTP, and have found the tutorial mentioned in this thread to be quite useful: http://www.erlang.org/ml-archive/erlang-questions/200602/msg00035.html It implements a gen_server using a port to communicate with an external program, using a call to the gen_server. 
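In outline, that kind of server looks something like the sketch below ("cat" just stands in for a real external program, and a real protocol would use {packet, N} framing rather than a raw stream):

-module(port_server_sketch).
-behaviour(gen_server).

-export([start_link/0, send/1]).
-export([init/1, handle_call/3, handle_cast/2, handle_info/2,
         terminate/2, code_change/3]).

start_link() ->
    gen_server:start_link({local, ?MODULE}, ?MODULE, [], []).

send(Data) ->
    gen_server:call(?MODULE, {send, Data}).

init([]) ->
    Port = open_port({spawn, "cat"}, [binary, stream]),
    {ok, Port}.

%% The external program is driven synchronously inside the call for
%% simplicity; a slow program gives the caller {error, timeout} rather
%% than a crash.
handle_call({send, Data}, _From, Port) ->
    erlang:port_command(Port, Data),
    receive
        {Port, {data, Reply}} ->
            {reply, {ok, Reply}, Port}
    after 5000 ->
        {reply, {error, timeout}, Port}
    end.

handle_cast(_Msg, Port)   -> {noreply, Port}.
handle_info(_Info, Port)  -> {noreply, Port}.
terminate(_Reason, _Port) -> ok.
code_change(_OldVsn, Port, _Extra) -> {ok, Port}.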
Cheers, Matt Matt McDonnell Email: matt@REDACTED Web: http://www.matt-mcdonnell.com/ From mark@REDACTED Tue Mar 28 16:41:30 2006 From: mark@REDACTED (mark@REDACTED) Date: Tue, 28 Mar 2006 15:41:30 +0100 Subject: SV: Trouble with gen_server In-Reply-To: <17449.17971.194833.242415@antilipe.corelatus.se> References: <20060328132452.GA31790@unhinged.eclipse.co.uk> <20060328135011.GA22019@geneity.co.uk> <17449.17971.194833.242415@antilipe.corelatus.se> Message-ID: <20060328144130.GA22136@geneity.co.uk> Ok, me being daft there. So how can I handle the timeout? I don't want the gen_server to die when the call exceeds the timeout, I just want to be able to reflect that this has happened. Thanks, Mark On Tue, Mar 28, 2006 at 04:20:35PM +0200, Matthias Lang wrote: > Mark Lee writes: > > > Ok, as simple as I can get. > > > > 1> test:start_link(). > > {ok,<0.33.0>} > > 2> test:test(). > > ** exited: {timeout,{gen_server,call,[test,test]}} ** > > 3> test:test(). > > ** exited: {noproc,{gen_server,call,[test,test]}} ** > > Your first line starts the server AND LINKS IT TO THE SHELL. > > So, when your call fails, your shell process dies. And thus your gen > server must die too. > > If you don't want that to happen, don't link the gen_server to the > shell. > > Matthias > > > > !DSPAM:442948ea186885973314758! > > From mark@REDACTED Tue Mar 28 16:49:19 2006 From: mark@REDACTED (mark@REDACTED) Date: Tue, 28 Mar 2006 15:49:19 +0100 Subject: SV: Trouble with gen_server In-Reply-To: <20060328144130.GA22136@geneity.co.uk> References: <20060328132452.GA31790@unhinged.eclipse.co.uk> <20060328135011.GA22019@geneity.co.uk> <17449.17971.194833.242415@antilipe.corelatus.se> <20060328144130.GA22136@geneity.co.uk> Message-ID: <20060328144919.GA22097@geneity.co.uk> On Tue, Mar 28, 2006 at 03:41:30PM +0100, mark@REDACTED wrote: > Ok, me being daft there. So how can I handle the timeout? I don't want > the gen_server to die when the call exceeds the timeout, I just want to > be able to reflect that this has happened. Sorry, sorry... still being daft. That solves both my problems doesn't it. Thanks very much everyone. Mark > > Thanks, > > Mark > > On Tue, Mar 28, 2006 at 04:20:35PM +0200, Matthias Lang wrote: > > Mark Lee writes: > > > > > Ok, as simple as I can get. > > > > > > 1> test:start_link(). > > > {ok,<0.33.0>} > > > 2> test:test(). > > > ** exited: {timeout,{gen_server,call,[test,test]}} ** > > > 3> test:test(). > > > ** exited: {noproc,{gen_server,call,[test,test]}} ** > > > > Your first line starts the server AND LINKS IT TO THE SHELL. > > > > So, when your call fails, your shell process dies. And thus your gen > > server must die too. > > > > If you don't want that to happen, don't link the gen_server to the > > shell. > > > > Matthias > > > > > > > > > > > > > > > > !DSPAM:44294bef188059629497355! > > From chandrashekhar.mullaparthi@REDACTED Tue Mar 28 18:45:51 2006 From: chandrashekhar.mullaparthi@REDACTED (chandru) Date: Tue, 28 Mar 2006 17:45:51 +0100 Subject: Large DBs, mnesia_frag ???? In-Reply-To: <008901c65242$35c01de0$9a0a10ac@wavenet.lk> References: <008901c65242$35c01de0$9a0a10ac@wavenet.lk> Message-ID: I'll try. On 28/03/06, Sanjaya Vitharana wrote: > > Hi ... !!! > > What will be the best way to handle 3 million records (size of the record > = 1K) in mnesia with 4GB RAM. Please help anyone with such experience. > We have an mnesia database with 25 million records in 128 fragments split across 2 erlang nodes. Server has 8GB of RAM and the two nodes use about 4GB together. 
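One thing worth stressing with fragmented tables is that ordinary reads and writes should go through the mnesia_frag access module, so records land in (and are looked up from) the correct fragment; the Tab_fragN names are internal. A minimal sketch, using the profile_db table from this thread:

write_profile(Rec) ->
    mnesia:activity(sync_transaction,
                    fun() -> mnesia:write(profile_db, Rec, write) end,
                    [], mnesia_frag).

read_profile(Key) ->
    mnesia:activity(transaction,
                    fun() -> mnesia:read(profile_db, Key, read) end,
                    [], mnesia_frag).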
Currently I'm testing with HP Server with 2GB RAM (there are plenty of > harddisk space). > > I'm using beow to create the table, but getting problems when the table > getting bigger (~350000 records). > > mnesia:create_table(profile_db,[ > {disc_copies, NodeList}, > {type, ordered_set}, > {index, [type, last_update_date, > first_creation_date, fax_no]}, > {frag_properties, [{n_fragments, 30},{n_disc_copies, > 1}]}, > {attributes, record_info(fields, profile_db)} > ]), > Bear in mind that when you have an ordered set fragmented table, each fragment is an ordered set. The property does not apply across all tables. Problems: (little bit details added to the end of the file, but may be not > sufficient, if anyone needs more details I can send) > > 1.) unexpected restarts by heart. I have increase the heart beat timeout > from 30 to 60 & 90. It will bring me from (~100000 receords to ~350000 > records). But again it comes again this time > > 2.) some unexpected errors, which was not happend earlier (I mean upto the > current size of the DB) > 2.1) {aborted,{no_exists,profile_db_frag25}} > This is strange. It seems to suggest that this fragment isn't available yet. Are all tables fully loaded before you start populating? Check using the mnesia:wait_for_tables function. 2.2) ** exited: {timeout,{gen_server,call,[vm_prof_db_svr,db_backup_once]}} > ** > The backup is taking quite a long time. Have you tried increasing the timeout? 2.3) error_info: {{failed,{error,{file_error, > > "/usr2/omni_vm_prof/db/vmdb/db/backup/db_back_2006-3-28_14-3-4.BUPTMP", > enoent}}}, > [{disk_log,open,1}]} > enoent - The temporary backup file does not exist. Dunno why. 2.4) {error,{"Cannot prepare checkpoint (replica not available)", > [profile_db_frag10,{{1143,528317,121399},vmdb@REDACTED}]}} > Looks like your fragments are spread across a few nodes and one of the fragments is not available - are all nodes connected to each other. 2.5) eheap_alloc: Cannot allocate 122441860 bytes of memory (of type > "heap"). > > Aborted > Your node ran out of memory. You seem to have quite a lot of secondary indices. Bear in mind that each one will consume more memory. It is trying to allocate about 122MB of memory. Have you tried this on a machine with 4GB of RAM. I have idea to changing the below properties and try, but I don't no this > will be the best way or not. > disc_copies -> disc_only_copies > Performance will suffer. ordered_set -> set (of course I could not find any direct function for this > in mnesia reference manual, are there any way ?) > You will have to delete the table and recreate it. hth Chandru -------------- next part -------------- An HTML attachment was scrubbed... URL: From camster@REDACTED Tue Mar 28 20:36:01 2006 From: camster@REDACTED (Richard Cameron) Date: Tue, 28 Mar 2006 19:36:01 +0100 Subject: SV: Trouble with gen_server In-Reply-To: <20060328144130.GA22136@geneity.co.uk> References: <20060328132452.GA31790@unhinged.eclipse.co.uk> <20060328135011.GA22019@geneity.co.uk> <17449.17971.194833.242415@antilipe.corelatus.se> <20060328144130.GA22136@geneity.co.uk> Message-ID: On 28 Mar 2006, at 15:41, mark@REDACTED wrote: > Ok, me being daft there. So how can I handle the timeout? I don't want > the gen_server to die when the call exceeds the timeout, I just > want to > be able to reflect that this has happened. Have a look at this: -module(s). -compile(export_all). -define(SERVER,srv). start() -> gen_server:start({local,?SERVER}, ?MODULE, [], []). 
start_link() -> gen_server:start_link({local,?SERVER}, ?MODULE, [], []). fail() -> gen_server:call(?SERVER, slowcall, 1). safe() -> case catch fail() of {'EXIT', Reason} -> {error, Reason}; Other -> {ok, Other} end. init([]) -> {ok, []}. handle_call(slowcall,_,State) -> receive after 100 -> x end, {reply, ok, State}. There are a few ways of running it. Here's what you were doing: Eshell V5.4.13 (abort with ^G) 1> s:start_link(). {ok,<0.37.0>} 2> s:fail(). ** exited: {timeout,{gen_server,call,[srv,slowcall,1]}} ** 3> s:fail(). ** exited: {noproc,{gen_server,call,[srv,slowcall,1]}} ** What's happening here is that you're using the shell to start your gen_server process. You end up linking the shell process to the gen_server process which means that if that the shell process dies, it takes down the gen_server with it. The default shell behaviour is that if it executes a function which fails, it exits and a new "replacement shell" process is spawned off to carry on. This is what's happening here. Your first call to s:fail() does just that - it fails. The gen_server was fine at that point... it didn't crash until it got brought down by the shell exiting (because the two processes are linked). Here's another way of running the example: 1> s:start(). {ok,<0.37.0>} 2> s:fail(). ** exited: {timeout,{gen_server,call,[srv,slowcall,1]}} ** 3> s:fail(). ** exited: {timeout,{gen_server,call,[srv,slowcall,1]}} ** Again, the fail() function fails, but this time the shell process dying doesn't cause the gen_server to terminate. At the end of this example it's still alive and well and processing requests. Another way is this: Eshell V5.4.13 (abort with ^G) 1> s:start_link(). {ok,<0.37.0>} 2> s:safe(). {error,{timeout,{gen_server,call,[srv,slowcall,1]}}} 3> s:safe(). {error,{timeout,{gen_server,call,[srv,slowcall,1]}}} This time, although the shell process is linked to the gen_server, I've caught the error from the fail() function and so the shell didn't exit. Thus, it didn't cause the gen_server to die. All three approaches are equally valid in different situations. Joe Armstrong's book (and his PhD thesis) gives examples of when you'd want to link processes together and when you wouldn't. Also, although there *is* a "catch" keyword in Erlang it's used surprisingly rarely. Catching all errors like this sometimes isn't actually terribly helpful, and it's certainly not something the language just makes you do to be awkward (religiously wrapping every possible things which could fail with a catch statement to prevent any knock-on effect for any processes linked to it - although I have seen Java programmers do things like this). You need to have a strategy of which processes should die if a call to the gen_server doesn't succeed. There could be instances where you return a "Sorry, system can't cope with the load at the moment" to the end user and press on regardless. On the other hand you might be talking about a realtime system where the gen_server can't and shouldn't take that long to reply. If that's the case then you'd probably want to ask the gen_server to crash (crashing is a _good_ thing in Erlang), shutdown cleanly, and then arrange for it to be restarted (by a supervisor) to get it into a decent state in order to continue processing requests. This is the Erlang "let it crash" approach. The OTP design principles document is probably the best starting point for all this stuff: http://erlang.se/doc/doc-5.4.12/doc/design_principles/part_frame.html Richard. 
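For completeness, the "restarted (by a supervisor)" part can be as small as this for the s server above (restart intensity numbers picked arbitrarily for the sketch):

-module(s_sup).
-behaviour(supervisor).

-export([start_link/0, init/1]).

start_link() ->
    supervisor:start_link({local, ?MODULE}, ?MODULE, []).

init([]) ->
    %% one_for_one: restart the s server when it crashes, give up if it
    %% crashes more than 5 times in 10 seconds
    {ok, {{one_for_one, 5, 10},
          [{s, {s, start_link, []},
            permanent, 2000, worker, [s]}]}}.

With the server started this way, a crashing handle_call simply brings a fresh server back, which is the "let it crash" recovery path described above.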
-------------- next part -------------- An HTML attachment was scrubbed... URL: From rprice@REDACTED Wed Mar 29 00:22:18 2006 From: rprice@REDACTED (Roger Price) Date: Wed, 29 Mar 2006 00:22:18 +0200 (CEST) Subject: Can pattern variables be globally bound? In-Reply-To: <442946B7.9020303@skri.net> References: <78568af10603250615t1d925999u9a7a3fbaf5c54214@mail.gmail.com> <442946B7.9020303@skri.net> Message-ID: On Tue, 28 Mar 2006, Thomas Johnsson wrote: > This is a messy corner in Erlang... Although same variables in patterns > in function heads and case expressions means equality, 'equality-ness' > does not propagate into fun()'s and list comprehensions, instead they > denote new variables with the same name: > > f(X,Y) -> %3 > F = fun(X) -> {X,X} end, %4 > F(Y). %5 > ... This is disconcerting. I had assumed that a case expression such as case Expr of (P1) -> E1 ; (P2) -> E2 end could be defined as fun (P1) -> E1 ; (P2) -> E2 end (Expr) but now it appears this is not true, since the fun introduces new variables in the patterns P1 and P2, which the case expression does not. Roger From pupeno@REDACTED Wed Mar 29 00:36:39 2006 From: pupeno@REDACTED (Pupeno) Date: Wed, 29 Mar 2006 00:36:39 +0200 Subject: Defensive programming Message-ID: <200603290036.39986.pupeno@pupeno.com> Hello, I am used to defensive programming and it's hard for me to program otherwise. Today I've found this piece of code I wrote some months ago: acceptor(tcp, Module, LSocket) -> case gen_tcp:accept(LSocket) of {ok, Socket} -> case Module:start() of {ok, Pid} -> ok = gen_tcp:controlling_process(Socket, Pid), gen_server:cast(Pid, {connected, Socket}), acceptor(tcp, Module, LSocket); {error, Error} -> {stop, {Module, LSocket, Error}} end; {error, Reason} -> {stop, {Module, LSocket, Reason}} end; is that too defensive ? should I write it this way acceptor(tcp, Module, LSocket) -> {ok, Socket} = case gen_tcp:accept(LSocket), {ok, Pid} = Module:start() ok = gen_tcp:controlling_process(Socket, Pid), gen_server:cast(Pid, {connected, Socket}), acceptor(tcp, Module, LSocket); ? Thanks. -- Pupeno (http://pupeno.com) -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 189 bytes Desc: not available URL: From ryanobjc@REDACTED Wed Mar 29 01:28:42 2006 From: ryanobjc@REDACTED (Ryan Rawson) Date: Tue, 28 Mar 2006 15:28:42 -0800 Subject: Defensive programming In-Reply-To: <200603290036.39986.pupeno@pupeno.com> References: <200603290036.39986.pupeno@pupeno.com> Message-ID: <78568af10603281528k986af8bx5d81b00ec8e544de@mail.gmail.com> I think Erlang is a wonderful language because it allows you to write defensively without littering your code with crap that doesn't serve any real purpose. When I say "real purpose" I mean if you can remove code without affecting the functionality or error handling, how can it be useful? In erlang you can take advantage of the fact that a unmatched pattern is a fault, and let the normal Erlang process kill/supervisor trees/error logging handle it for you. As an added bonus you know the exact value which failed to match and which function it failed in. Your "defensive" code sample doesn't have those advantages. So what I'm saying is bringing legacy-C coding style (clearly the first snippet is inspired by "check every return value if/then/else" style of C) actually obscures the true meaning of the code, AND it actually makes your system less defensive and debuggable. Best to stick with the simple second style. 
And this is why I love Erlang - you get to have simple code that is in fact more robust than the "robustly coded" examples. -ryan On 3/28/06, Pupeno wrote: > Hello, > I am used to defensive programming and it's hard for me to program otherwise. > Today I've found this piece of code I wrote some months ago: > > acceptor(tcp, Module, LSocket) -> > case gen_tcp:accept(LSocket) of > {ok, Socket} -> > case Module:start() of > {ok, Pid} -> > ok = gen_tcp:controlling_process(Socket, Pid), > gen_server:cast(Pid, {connected, Socket}), > acceptor(tcp, Module, LSocket); > {error, Error} -> > {stop, {Module, LSocket, Error}} > end; > {error, Reason} -> > {stop, {Module, LSocket, Reason}} > end; > > is that too defensive ? should I write it this way > > acceptor(tcp, Module, LSocket) -> > {ok, Socket} = case gen_tcp:accept(LSocket), > {ok, Pid} = Module:start() > ok = gen_tcp:controlling_process(Socket, Pid), > gen_server:cast(Pid, {connected, Socket}), > acceptor(tcp, Module, LSocket); > > ? > > Thanks. > -- > Pupeno (http://pupeno.com) > > > From orbitz@REDACTED Wed Mar 29 02:05:23 2006 From: orbitz@REDACTED (orbitz@REDACTED) Date: Tue, 28 Mar 2006 19:05:23 -0500 Subject: Defensive programming In-Reply-To: <200603290036.39986.pupeno@pupeno.com> References: <200603290036.39986.pupeno@pupeno.com> Message-ID: I would say so. In your defensive code, if your accept fails, what do you do? You the process. So how is this different than just letting the a badmatch happen and failing? Your defensive code is much more complex with no benefit. The supervisor will handle the error for you and restart or stop it depending on what you told it to do. Much simpler. On Mar 28, 2006, at 5:36 PM, Pupeno wrote: > Hello, > I am used to defensive programming and it's hard for me to program > otherwise. > Today I've found this piece of code I wrote some months ago: > > acceptor(tcp, Module, LSocket) -> > case gen_tcp:accept(LSocket) of > {ok, Socket} -> > case Module:start() of > {ok, Pid} -> > ok = gen_tcp:controlling_process(Socket, Pid), > gen_server:cast(Pid, {connected, Socket}), > acceptor(tcp, Module, LSocket); > {error, Error} -> > {stop, {Module, LSocket, Error}} > end; > {error, Reason} -> > {stop, {Module, LSocket, Reason}} > end; > > is that too defensive ? should I write it this way > > acceptor(tcp, Module, LSocket) -> > {ok, Socket} = case gen_tcp:accept(LSocket), > {ok, Pid} = Module:start() > ok = gen_tcp:controlling_process(Socket, Pid), > gen_server:cast(Pid, {connected, Socket}), > acceptor(tcp, Module, LSocket); > > ? > > Thanks. > -- > Pupeno (http://pupeno.com) From rlenglet@REDACTED Wed Mar 29 05:29:40 2006 From: rlenglet@REDACTED (Romain Lenglet) Date: Wed, 29 Mar 2006 12:29:40 +0900 Subject: Port driver communication witout copy of binaries In-Reply-To: References: <200603281833.46399.rlenglet@users.forge.objectweb.org> Message-ID: <200603291229.40339.rlenglet@users.forge.objectweb.org> Raimo Niskanen wrote: > What you really want to do can not be done (as far as I know) > but you might get it done with some tricks... > > To avoid copying your driver must implement the > ->outputv() entry point and you must send it I/O lists > being lists of binaries (might even be an improper list, > that is a binary in the tail). You will have to map > your tuples into that. 
> > If you send [1,<<2,3,4>>,5,6|<<7,8>>] to the driver, > void (*outputv)(ErlDrvData drv_data, ErlIOVec *ev) will get: > > ev->iov[0].iov_len = 1; > ev->iov[0].iov_base -> {1}; > ev->binv[0] = NULL; > ev->iov[1].iov_len = 3; > ev->iov[1].iov_base -> ev->binv[1]->orig_bytes; > ev->binv[1]->orig_size = 3; > ev->binv[1]->orig_bytes = {2,3,4}; > ev->iov[2].iov_len = 2; > ev->iov[2].iov_base -> {5,6}; > ev->binv[2] = NULL; > ev->iov[3].iov_len = 2; > ev->iov[3].iov_base -> ev->binv[3]->orig_bytes; > ev->binv[3]->orig_size = 2; > ev->binv[3]->orig_bytes = {7,8}; > > approximately, excuse my syntax :-) > > Binaries will be binaries and intermediate bytes > will be loose vectors. If your driver wants to > hang on to the data, it will have to use the > reference count in the binary to avoid premature freeing. Very nice! Using lists is OK for me. I don't have to stick to tuples, I used tuples only to explain my problem. > To send data back without copying your driver will > have to use driver_outputv() and it arrives to erlang as > a header list of integers followed by a list of > binaries. Conversion to tuple format will have to > be done in erlang. Yes, I wanted to use such functions, but I was not sure if binaries were copied. > Keep on dreaming... > > Have a look at efile_drv.c in the sources... Thanks a lot, I have a little hope now to do things cleanly. :) -- Romain LENGLET From tony@REDACTED Wed Mar 29 08:07:16 2006 From: tony@REDACTED (Tony Rogvall) Date: Wed, 29 Mar 2006 08:07:16 +0200 Subject: Defensive programming In-Reply-To: <200603290036.39986.pupeno@pupeno.com> References: <200603290036.39986.pupeno@pupeno.com> Message-ID: <0FC6DF8F-8F07-48EA-9F36-0271684AE69E@rogvall.com> On 29 mar 2006, at 00.36, Pupeno wrote: > Hello, > I am used to defensive programming and it's hard for me to program > otherwise. > Today I've found this piece of code I wrote some months ago: > > acceptor(tcp, Module, LSocket) -> > case gen_tcp:accept(LSocket) of > {ok, Socket} -> > case Module:start() of > {ok, Pid} -> > ok = gen_tcp:controlling_process(Socket, Pid), > gen_server:cast(Pid, {connected, Socket}), > acceptor(tcp, Module, LSocket); > {error, Error} -> > {stop, {Module, LSocket, Error}} > end; > {error, Reason} -> > {stop, {Module, LSocket, Reason}} > end; > > is that too defensive ? should I write it this way I suggest you think about the term {error,Reason} as any other term. Either you need to handle it as a result from a function call, or you do not. > > acceptor(tcp, Module, LSocket) -> > {ok, Socket} = case gen_tcp:accept(LSocket), > {ok, Pid} = Module:start() > ok = gen_tcp:controlling_process(Socket, Pid), > gen_server:cast(Pid, {connected, Socket}), > acceptor(tcp, Module, LSocket); > The may be some reasons why accpet fail. - The listen socket has closed. - Resource problems. - Timeout (if used) This may require special treat. /Tony From ok@REDACTED Wed Mar 29 08:09:14 2006 From: ok@REDACTED (Richard A. O'Keefe) Date: Wed, 29 Mar 2006 18:09:14 +1200 (NZST) Subject: obscure erlang-related publication Message-ID: <200603290609.k2T69EWA243201@atlas.otago.ac.nz> Matthias Lang wrote about the paper > "A comparison of six languages for system level > description of telecom applications" > Jantsch, Kumar, Sander et al. > http://www.imit.kth.se/~axel/papers/2000/comparison.pdf to the effect that he wasn't really impressed. Neither was I. They are comparing languages for *hardware* and software concurrent systems. 
- So they don't include Ada, which is a mature language with excellent tool support and good support for concurrency and structuring. (And was in 2000.) - So they include C++, and then find to their great surprise that it doesn't do concurrency. They _don't_ consider any of the parallel/distributed versions of C++. - So they include Haskell, but find to their great surprise that it doesn't do concurrency. They _don't_ consider Concurrent Haskell. (Which I am pretty sure was around in 2000. Certainly Concurrent Clean, which is pretty Haskell-like, was around then, and really was concurrent, although recent versions aren't.) - So they find, apparently to their surprise, that VHDL, which was designed for specifying hardware, is good at specifying hardware, and the other languages aren't. - They DO try to ensure that they aren't just reporting their prejudices by writing an application of the kind they care about in the several languages, but then they deliberately choose to say nothing about the code they got or their experience of writing it. Their evaluation method basically amounts to making your preconceived ideas of what kind of solution you are looking for seem respectable by wrapping (arbitrary!) numbers around them. Basically, the only thing I learned from that paper was "these people are interested in this topic". No, I tell a lie. The other thing I learned was that I have been a complete fool to myself by waiting until I actually had some results worth discussing before publishing ideas. Now I know how to get my publication count high... From valentin@REDACTED Wed Mar 29 08:33:53 2006 From: valentin@REDACTED (Valentin Micic) Date: Wed, 29 Mar 2006 08:33:53 +0200 Subject: Port driver communication witout copy of binaries References: <200603281833.46399.rlenglet@users.forge.objectweb.org> <200603291229.40339.rlenglet@users.forge.objectweb.org> Message-ID: <005b01c652fa$c060f880$0100000a@MONEYMAKER2> Is there any reason for not using: erlang:port_call( Port, Cmd, DATA ) which results in call to linked-in driver's: int edk_call( ErlDrvData handle, unsigned int cmd, char * buf, int len, char **rbuf, int rlen, unsigned int * flags ) Argument DATA specified in erlang's port_call is passed by reference. The driver may return data in rbuf. Should there be more memory required that specified by rlen, such memory may be allocated, using driver_alloc... no need to free rbuf, as it points to automatic variable. V. PS I've been using this approach for integration with some ss7 adapter... to cut the story short, it's working just fine. Documentation is indicating that this is the fastest way to interact with driver. From ulf.wiger@REDACTED Wed Mar 29 08:51:44 2006 From: ulf.wiger@REDACTED (Ulf Wiger (AL/EAB)) Date: Wed, 29 Mar 2006 08:51:44 +0200 Subject: obscure erlang-related publication Message-ID: Richard A. O'Keefe wrote: > > Matthias Lang wrote about the paper > > "A comparison of six languages for system level > > description of telecom applications" > > Jantsch, Kumar, Sander et al. > > http://www.imit.kth.se/~axel/papers/2000/comparison.pdf > to the effect that he wasn't really impressed. > > Neither was I. Fair enough. I wasn't very clear about merely suggesting that the article be added to the publications list, or some other list that proposes to give an overview of available research related to Erlang. This is what the text under erlang.se/publications/ says: "This page contains articles, books, papers, Powerpoint presentations, and Master's Theses that relate or refer to Erlang/OTP." 
It doesn't suggest that all articles have been subjected to peer review and collectively approved by the erlang community (they haven't), or that all articles measure up to some standard of being "impressive" (they don't). Having said this, I think it's a good thing that a discussion about the quality of available articles is called into question. I have no personal stake in this particular article > They are comparing languages for *hardware* and software > concurrent systems. > > - So they don't include Ada, which is a mature language with > excellent > tool support and good support for concurrency and > structuring. (And > was in 2000.) > > - So they include C++, and then find to their great surprise that > it doesn't do concurrency. They _don't_ consider any of the > parallel/distributed versions of C++. A surprising number of people in industry are of the opinion (mainly based on hearsay, I believe) that C++ _is_ a good systems description language - or alternatively that there are programming languages and system description languages (= UML), and no programming languages are particularly better than C++. (One could have extended the paper with some words about why one would want to evaluate a programming language as a system description language, but perhaps that's fodder for an entirely different paper...?) Many researchers, for better and for worse, have a tendency to focus their research on what they think industry wants (UML, Java, C++, ...) > - So they include Haskell, but find to their great surprise that > it doesn't do concurrency. They _don't_ consider > Concurrent Haskell. > (Which I am pretty sure was around in 2000. Certainly Concurrent > Clean, which is pretty Haskell-like, was around then, and really > was concurrent, although recent versions aren't.) In fairness, they do write that their purpose of the evaluation is to "illustrate, how giving high or low importance to a particular aspect affects the relative performance of a language", and: "One motivation for this selection was to cover different paradigms and aspects. Another practical reason was that these languages are well known by the authors." > - They DO try to ensure that they aren't just > reporting their prejudices by writing an > application of the kind they care about in > the several languages, but then they > deliberately choose to say nothing about the > code they got or their experience of writing it. I agree that this was disappointing, and I had difficulty finding a good reason for it, other than that it would have meant more work before publishing the paper. > Their evaluation method basically amounts to making > your preconceived ideas of what kind of solution you > are looking for seem respectable by wrapping > (arbitrary!) numbers around them. Do you have a link to a better method? The evaluation described in the paper is _far_ less arbitrary than many evaluations I've witnessed elsewhere. I think the way these comparisons are handled in industry is sorely lacking. > The other thing I learned was that I have been a > complete fool to myself by waiting until I actually had some > results worth discussing before publishing ideas. > Now I know how to get my publication count high... As much as I would welcome more papers from you, there is also great value in trusting that when certain people _do_ publish, it will be well worth reading. 
(: BR, Ulf W From rxiong@REDACTED Wed Mar 29 01:30:21 2006 From: rxiong@REDACTED (Renyi Xiong) Date: Tue, 28 Mar 2006 15:30:21 -0800 Subject: About Erlang system nodes References: Message-ID: <000601c652bf$94adb100$5e0fa8c0@HP78819433158> I tried ethereal to intercept packets between 2 erlang nodes over ssl and proved that you guys are right! - the packets were no longer plain text after I applied ssl. Brian, Since erlang has built in security (ssl), we probably don't need any ip tunneling. Thank you very much, Renyi. ----- Original Message ----- From: "Ulf Wiger (AL/EAB)" To: "Renyi Xiong" ; "chandru" ; Cc: Sent: Monday, March 27, 2006 11:52 PM Subject: RE: About Erlang system nodes This seems impossible. There is only one tcp session between two erlang nodes. All communication, be it spawn commands or pure message passing, is passed on the same link. The erlang:send/2 function (the ! operator) is implemented in the virtual machine. The VM knows which port is mapped to a given node, and sends messages through that port. If that port is opened over SSL, all communication between the two nodes will be encrypted. BR, Ulf W > -----Original Message----- > From: owner-erlang-questions@REDACTED > [mailto:owner-erlang-questions@REDACTED] On Behalf Of Renyi Xiong > Sent: den 27 mars 2006 18:44 > To: chandru; tzheng@REDACTED > Cc: erlang-questions@REDACTED > Subject: Re: About Erlang system nodes > > But I found if we run distributed erlang over SSL, it only > affects those distributed command like spawn_link. It doesn't > affect primitive command like message passing command - '!' > which we concern about. Cause that means if we run > distributed Mnesia, it doesn't automatically have encrypted > communication between Mnesia nodes even if SSL is employed. > Is that correct? > > Thanks a lot, > Renyi. > > ----- Original Message ----- > From: "chandru" > To: > Cc: ; "Renyi Xiong" > Sent: Friday, March 24, 2006 1:32 AM > Subject: Re: About Erlang system nodes > > > On 23/03/06, Tony Zheng wrote: > > Hi Chandru > > > > Are there any encrypted mechanisms when Mnesia replicate tables on > > different Erlang nodes? We will put Erlang nodes in > different locations > > on internet, we want to know if it is secure for Mnesia to replicate > > tables on internet. > > Thanks. > > > You can run distributed erlang over SSL. That will encrypt all traffic > between the nodes. > See > http://www.erlang.org/doc/doc-5.4.13/lib/ssl-3.0.11/doc/html/u > sersguide_frame.html > for more info on how to do this. > > cheers > Chandru > From rlenglet@REDACTED Wed Mar 29 08:55:51 2006 From: rlenglet@REDACTED (Romain Lenglet) Date: Wed, 29 Mar 2006 15:55:51 +0900 Subject: Port driver communication witout copy of binaries In-Reply-To: <005b01c652fa$c060f880$0100000a@MONEYMAKER2> References: <200603281833.46399.rlenglet@users.forge.objectweb.org> <200603291229.40339.rlenglet@users.forge.objectweb.org> <005b01c652fa$c060f880$0100000a@MONEYMAKER2> Message-ID: <200603291555.51224.rlenglet@users.forge.objectweb.org> Valentin Micic wrote: > Is there any reason for not using: > > erlang:port_call( Port, Cmd, DATA ) > > which results in call to linked-in driver's: > > int edk_call( ErlDrvData handle, unsigned int cmd, char * buf, > int len, char **rbuf, int rlen, unsigned int * flags ) > > Argument DATA specified in erlang's port_call is passed by > reference. The driver may return data in rbuf. Should there be > more memory required that specified by rlen, such memory may > be allocated, using driver_alloc... 
no need to free rbuf, as > it points to automatic variable. The problem is that I don't have only one binary to pass as an argument. I need to pass a large binary AND other data along (atoms, integers...), because those must be manipulated on the C side. If I used port_call/3, I would have to build a binary to pass as DATA on the Erlang side, which would imply copying the large binary to create the DATA. I want to avoid that: ... LargeBinaryToNotCopy = <<.....>>, Data = <>, erlang:port_call(Port, Cmd, Data), ... The IO-list-related primitives seem better, as pointed out by Raimo Niskanen, since they do not require to copy the binary: ... LargeBinaryToNotCopy = <<.....>>, Data = [Byte1, Byte2, LargeBinaryToNotCopy, Byte3], erlang:port_command(Port, Data), ... And I have the same problem for return, but IO-list primitives are a solution to my problem. > > V. > > PS > I've been using this approach for integration with some ss7 > adapter... to cut the story short, it's working just fine. > Documentation is indicating that this is the fastest way to > interact with driver. -- Romain LENGLET From joe.armstrong@REDACTED Wed Mar 29 09:08:43 2006 From: joe.armstrong@REDACTED (Joe Armstrong (AL/EAB)) Date: Wed, 29 Mar 2006 09:08:43 +0200 Subject: Defensive programming Message-ID: > -----Original Message----- > From: owner-erlang-questions@REDACTED > [mailto:owner-erlang-questions@REDACTED] On Behalf Of Pupeno > Sent: den 29 mars 2006 00:37 > To: erlang-questions@REDACTED > Subject: Defensive programming > > Hello, > I am used to defensive programming and it's hard for me to > program otherwise. You are getting the heart of the matter :-) > Today I've found this piece of code I wrote some months ago: > > acceptor(tcp, Module, LSocket) -> > case gen_tcp:accept(LSocket) of > {ok, Socket} -> > case Module:start() of > {ok, Pid} -> > ok = gen_tcp:controlling_process(Socket, Pid), > gen_server:cast(Pid, {connected, Socket}), > acceptor(tcp, Module, LSocket); > {error, Error} -> > {stop, {Module, LSocket, Error}} > end; > {error, Reason} -> > {stop, {Module, LSocket, Reason}} > end; 14 lines of complex code - with a doubly indented case clause > > is that too defensive ? should I write it this way > > acceptor(tcp, Module, LSocket) -> > {ok, Socket} = case gen_tcp:accept(LSocket), > {ok, Pid} = Module:start() > ok = gen_tcp:controlling_process(Socket, Pid), > gen_server:cast(Pid, {connected, Socket}), > acceptor(tcp, Module, LSocket); > vs 6 lines of linear code with no conditional structure. How can you ask the question - you KNOW the answer. Six lines of linear code is far better that 14 lines of code with conditional structure. At some level your brain is crying out "the six line version is better" - but this is counter to everything you have ever learnt about "defensive programming" - What you learn about defensive programming came from the rule book for writing sequential programs. Now you need to unlearn this when writing Erlang programs. I just say: "Let it crash" To fully understand this statement you need to understand the underlying philosophy of error handling in Erlang - and also what consequences this philosophy has for how you actually write programs. In what follows I will explain the philosophy and at the end of this show you some code that follows from the philosophy. Error handling in Erlang is based on the idea of "workers" and "observers" where both workers and observers are processes. 
    +-----------+                  +-----------+
    | Worker    |------->----------| Observer  |
    +-----------+   error signal   +-----------+

Workers do the jobs - observers watch the workers. If a mistake occurs in the worker, they should just crash. When they crash an error signal is sent to the observer.

The worker's job is to do the work and *nothing else*. The observer's job is to correct errors and *nothing else*. This provides a clean separation of concerns.

Note that this method of structuring cannot be done in a sequential language - since there is only one thread of control, all error handling MUST be done *within* the process itself. That's why you have to program defensively in a sequential language - you get one thread of control and one chance to fix your error.

Why did things evolve this way? The answer has to do with how we program fault-tolerant systems. To make a fault-tolerant system you need (at least) TWO computers (think about it :-) - now suppose one computer crashes - how do you fix the fault? On the other computer, since THERE IS NO ALTERNATIVE.

Now think "what are processes" - one way of thinking about processes is to imagine them as tiny little machines - if this is the case then error handling should be handled in the same way as with real machines. Why, you ask? So that there is no semantic mismatch when we model real-world behaviour as sets of processes. If you have N independent things in the real world, you model them with EXACTLY N Erlang processes and you set up error observation channels exactly as they would occur in the real world - as far as possible your program should be isomorphic to the problem - that way the code will virtually "write itself" - deviation from this will lead to a mess.

The worker-observer error handling model is sufficient for most simple problems, but for complex problems we might imagine building a hierarchical tree of workers and observers where the observers themselves are observed by some other observers - a management tree, as it were. This generalisation is called a "supervision tree" and is one of the standard behaviours in the OTP libraries.

Rather than learning how to use the supervision tree (which is overkill for many small applications) the best approach is to use the simplest form of error recovery. A bit of code like this:

    observer(Fun) ->
        process_flag(trap_exit, true),
        Pid = spawn_link(Fun),
        receive
            {'EXIT', Pid, Why} ->
                io:format("worker died with:~p~n", [Why])
        end.

sets everything up. This process spawn_links Fun (i.e. spawns it with a link) - the trap_exit is needed because, if you don't have it, the watching process will die if the spawned process dies. Now you write Fun *with no error handling* - you'll get a few error messages that are printed out - decide which ones are recoverable, write a bit of error-correcting code, and you are done.

Agggh - there's a gotcha here. If you run observer(Fun) in the shell, the trap_exit flag will affect the shell itself - also, if the observer dies it might crash the shell (I'm not sure if the current shell spawns, or spawn_links, or applies the arguments it is given). In any case it is good practice not to assume anything about the shell. So if we just define

    run(Fun) -> spawn(fun() -> observer(Fun) end).

then evaluating run(Fun) sets up a worker which evaluates Fun and an observer which prints an error if the worker dies. That's all you need.

Type this code in, run it and understand it. Then write the worker with no messy error handling code - just "let it crash".
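Putting the pieces together, a complete module along these lines might look as follows. This is only an illustrative sketch: the module name obs_demo, the crashy_worker example and the file name it tries to read are invented here, and run/1 must pass the variable Fun (not the keyword fun) to observer/1.

    -module(obs_demo).
    -export([run/1, observer/1, crashy_worker/0]).

    %% Observer: trap exits, link to the worker, report why it died.
    observer(Fun) ->
        process_flag(trap_exit, true),
        Pid = spawn_link(Fun),
        receive
            {'EXIT', Pid, Why} ->
                io:format("worker died with:~p~n", [Why])
        end.

    %% Run the observer in its own process, so the shell is not affected
    %% by trap_exit, nor by the observer itself dying.
    run(Fun) ->
        spawn(fun() -> observer(Fun) end).

    %% A worker with no error handling at all: it assumes the file exists
    %% and simply crashes (badmatch) if it does not.
    crashy_worker() ->
        {ok, Bin} = file:read_file("/nonexistent/file"),
        size(Bin).

Calling obs_demo:run(fun obs_demo:crashy_worker/0) from the shell should then print a badmatch report from the observer while the shell itself stays up.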
Follow these simple rules and your code will be beautiful and easy to understand. " -- Break any of these rules sooner than say anything outright barbarous --" George Orwell Politics and the English Language" (1946) Cheers /Joe From raimo@REDACTED Wed Mar 29 09:19:33 2006 From: raimo@REDACTED (Raimo Niskanen) Date: 29 Mar 2006 09:19:33 +0200 Subject: Port driver communication witout copy of binaries References: , <200603291229.40339.rlenglet@users.forge.objectweb.org>, <005b01c652fa$c060f880$0100000a@MONEYMAKER2> Message-ID: There are two main differences. One making it faster than non-copying port_command, and one making it slower... erlang:port_call( Port, Cmd, DATA ) Converts DATA to the erlang external term format. The driver then has to use conversion functions in the erl_interface and ei C libraries to get to the raw data. The driver also has to provide the return value in erlang external term format. These conversions forces data to be copied in both directions since the external term format is not the same as the internal. The term conversions are to facilitate using port_call/3 to mimic BIFs. The return value is returned to the erlang caller without process scheduling, so there is no delay in the return value transport. Also like BIFs. erlang:port_command(Port, Data) Handles raw byte data. The return value is placed in the caller's inbox (normally), so it has to be fetched using receive in erlang. This is more like send/receive towards a server and hence slower than calling a BIF or port_call/3. So, for implementing a BIF of your own, erlang:port_call/3 is faster. For bulk data transport non-copying erlang:port_command/2 is more efficient. valentin@REDACTED (Valentin Micic) writes: > Is there any reason for not using: > > erlang:port_call( Port, Cmd, DATA ) > > which results in call to linked-in driver's: > > int edk_call( ErlDrvData handle, unsigned int cmd, char * buf, int > len, char **rbuf, int rlen, unsigned int * flags ) > > Argument DATA specified in erlang's port_call is passed by > reference. The driver may return data in rbuf. Should there be more > memory required that specified by rlen, such memory may be allocated, > using driver_alloc... no need to free rbuf, as it points to automatic > variable. > > V. > > PS > I've been using this approach for integration with some ss7 > adapter... to cut the story short, it's working just > fine. Documentation is indicating that this is the fastest way to > interact with driver. > -- / Raimo Niskanen, Erlang/OTP, Ericsson AB From ok@REDACTED Wed Mar 29 09:22:22 2006 From: ok@REDACTED (Richard A. O'Keefe) Date: Wed, 29 Mar 2006 19:22:22 +1200 (NZST) Subject: obscure erlang-related publication Message-ID: <200603290722.k2T7MMVH459923@atlas.otago.ac.nz> "Ulf Wiger \(AL/EAB\)" wrote: > Their evaluation method basically amounts to making > your preconceived ideas of what kind of solution you > are looking for seem respectable by wrapping > (arbitrary!) numbers around them. Do you have a link to a better method? The method boils down to the usual "choose a list of features/aspects, assign a crude 'number of stars' rating to each, and add up your scores." The main good point was considering more than one problem situation, so using more than one set of weights. However, the connections between successive tables were not entirely clear, so it wasn't completely clear to me how to adapt their method to my situations. When it comes to things like tool support, I have no idea at all (from the paper) how to derive the entries. 
In fact some of the entries in some of the tables were a surprise to me. I'm not sure that a good _general_ language evaluation method actually exists. The data gathering would be so costly that only a government could do it. This doesn't mean that I don't think rational choices can be made in specific situations. The evaluation described in the paper is _far_ less arbitrary than many evaluations I've witnessed elsewhere. I think the way these comparisons are handled in industry is sorely lacking. Well, yes. "In the country of the blind, the one-eyed man is king." There _are_ techniques for extracting scales from data like this. (I'm particularly thinking of Multiple Correspondence Analysis.) But they need a lot _more_ data. It would, for example, have been very informative if, instead of reporting "pooled" estimates for each language, each different person's ratings had been shown. As it is, one may reasonably guess that a *lot* of rater disagreement has been hidden from critical eyes. From vlad.xx.dumitrescu@REDACTED Wed Mar 29 09:35:52 2006 From: vlad.xx.dumitrescu@REDACTED (Vlad Dumitrescu XX (LN/EAB)) Date: Wed, 29 Mar 2006 09:35:52 +0200 Subject: Conditional compilation (was: Erlang/OTP R10B-10 has been released) Message-ID: <11498CB7D3FCB54897058DE63BE3897C016B9EE7@esealmw105.eemea.ericsson.se> Hi, > -----Original Message----- > From: Richard A. O'Keefe >> I suppose so, but note that Erlang doesn't (AFAIK) have >> nestable multiline comments, > > I don't know any language that has. Not ones that *WORK*, anyway. > I am, for example, aware of {- -} in Haskell, and I'm also > aware of the trouble they had trying to get them to work, and > that they failed. > I'm also aware of #| |# in Common Lisp, and of the fact that > they don't work either. Is it really that bad? There are languages that have both "regular multiline comments" and "nestable multiline comments", like D, and I took this as they made those to work. I am working on something that requires the same kind of nestability, and took for granted that it's just a matter of making the parser aware of the situation and start/stop tokens sufficiently easy to recognize. Are there any other issues that I should read more about? If yes, do you happen to have any links handy, or Google is my only friend? Best regards, Vlad From ke.han@REDACTED Wed Mar 29 09:51:37 2006 From: ke.han@REDACTED (ke han) Date: Wed, 29 Mar 2006 15:51:37 +0800 Subject: obscure erlang-related publication In-Reply-To: <200603290609.k2T69EWA243201@atlas.otago.ac.nz> References: <200603290609.k2T69EWA243201@atlas.otago.ac.nz> Message-ID: On Mar 29, 2006, at 2:09 PM, Richard A. O'Keefe wrote: > > Basically, the only thing I learned from that paper was "these people > are interested in this topic". No, I tell a lie. The other thing I > learned was that I have been a complete fool to myself by waiting > until > I actually had some results worth discussing before publishing ideas. > Now I know how to get my publication count high... > Yes, research papers are now written more as Sunday paper op-eds. No reason to follow old-school academic publishing standards anymore ;-) ke han From pupeno@REDACTED Wed Mar 29 01:40:45 2006 From: pupeno@REDACTED (Pupeno) Date: Wed, 29 Mar 2006 01:40:45 +0200 Subject: Getting a *more* random value Message-ID: <200603290140.49433.pupeno@pupeno.com> Hello. I have an application where new processes are launched which calculate one random number between 0 and 512 and then do something with that number and terminate. 
I am currently using random:uniform(513) - 1 but that gives me the same random value for all the processes for each run of the application. I see that consecutive calls would get other numbers but I only need one. How am I supposed to get a number that is not the same on every process ? Thank you. -- Pupeno (http://pupeno.com) -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 189 bytes Desc: not available URL: From ulf.wiger@REDACTED Wed Mar 29 10:04:33 2006 From: ulf.wiger@REDACTED (Ulf Wiger (AL/EAB)) Date: Wed, 29 Mar 2006 10:04:33 +0200 Subject: Getting a *more* random value Message-ID: Random has a hard-coded default seed. That's why you get the same series for each run. Calling e.g. {I1,I2,I3} = erlang:now(), random:seed(I1,I2,I3). would give you a new series each time. BR, Ulf W > -----Original Message----- > From: owner-erlang-questions@REDACTED > [mailto:owner-erlang-questions@REDACTED] On Behalf Of Pupeno > Sent: den 29 mars 2006 01:41 > To: erlang-questions@REDACTED > Subject: Getting a *more* random value > > Hello. > I have an application where new processes are launched which > calculate one random number between 0 and 512 and then do > something with that number and terminate. > I am currently using random:uniform(513) - 1 but that gives > me the same random value for all the processes for each run > of the application. I see that consecutive calls would get > other numbers but I only need one. > How am I supposed to get a number that is not the same on > every process ? > Thank you. > -- > Pupeno (http://pupeno.com) > From vlad.xx.dumitrescu@REDACTED Wed Mar 29 10:07:54 2006 From: vlad.xx.dumitrescu@REDACTED (Vlad Dumitrescu XX (LN/EAB)) Date: Wed, 29 Mar 2006 10:07:54 +0200 Subject: Conditional compilation (was: Erlang/OTP R10B-10 has been released) Message-ID: <11498CB7D3FCB54897058DE63BE3897C016B9F58@esealmw105.eemea.ericsson.se> > -----Original Message----- > From: Richard A. O'Keefe [mailto:ok@REDACTED] > Comments are sometimes used for commenting out *code*, which follows the lexical > rules of the language. But they are meant for including *text*, which doesn't > follow the lexical rules of the language. The big problem is that comments cannot > tell what kind of content they have. Ah, okay, I see. These were the problems I did see too, I was afraid there was something I had missed. > You can have commenting-out brackets that work if and only if > they are never allowed to contain plain text, so that > token --> number | string | atom | punctuation | comment > comment --> comment_open token* comment_close Yes, my nestable chunks will be code, thus parseable, so I guess it will work -- from this point of view, at least; it may prove unusable from others ;-) Best regards, Vlad From ft@REDACTED Wed Mar 29 10:08:18 2006 From: ft@REDACTED (Fredrik Thulin) Date: Wed, 29 Mar 2006 10:08:18 +0200 Subject: Getting a *more* random value In-Reply-To: <200603290140.49433.pupeno@pupeno.com> References: <200603290140.49433.pupeno@pupeno.com> Message-ID: <200603291008.18879.ft@it.su.se> On Wednesday 29 March 2006 01:40, Pupeno wrote: > Hello. > I have an application where new processes are launched which > calculate one random number between 0 and 512 and then do something > with that number and terminate. > I am currently using random:uniform(513) - 1 but that gives me the > same random value for all the processes for each run of the > application. 
I see that consecutive calls would get other numbers but > I only need one. How am I supposed to get a number that is not the > same on every process ? Thank you. Seed the random generator with something that isn't the same when you restart your application? 1> {A,B,C} = erlang:now(), random:seed(A, B, C). undefined 2> random:uniform(513). 123 3> restart 1> {A,B,C} = erlang:now(), random:seed(A, B, C). undefined 2> random:uniform(513). 254 3> /Fredrik From ryanobjc@REDACTED Wed Mar 29 10:18:21 2006 From: ryanobjc@REDACTED (Ryan Rawson) Date: Wed, 29 Mar 2006 00:18:21 -0800 Subject: Getting a *more* random value In-Reply-To: <200603290140.49433.pupeno@pupeno.com> References: <200603290140.49433.pupeno@pupeno.com> Message-ID: <78568af10603290018u7e5512ag712d37cd71ab0bc1@mail.gmail.com> You could always bury the first value? Also you could probably seed the pseudo random generator with the results of another pseudo random sequence? Basically just ensure each process has a different PRNG seed and you should be golden. Anyone want to speak to quality of the PNRG implented in uniform? I know the man page cites a journal entry, but I really dont have access to it. Clearly its good enough for generating test cases, no? -ryan On 3/28/06, Pupeno wrote: > Hello. > I have an application where new processes are launched which calculate one > random number between 0 and 512 and then do something with that number and > terminate. > I am currently using random:uniform(513) - 1 but that gives me the same random > value for all the processes for each run of the application. I see that > consecutive calls would get other numbers but I only need one. > How am I supposed to get a number that is not the same on every process ? > Thank you. > -- > Pupeno (http://pupeno.com) > > > From mark@REDACTED Wed Mar 29 10:57:20 2006 From: mark@REDACTED (Mark Lee) Date: Wed, 29 Mar 2006 09:57:20 +0100 Subject: SV: Trouble with gen_server In-Reply-To: References: <20060328132452.GA31790@unhinged.eclipse.co.uk> <20060328135011.GA22019@geneity.co.uk> <17449.17971.194833.242415@antilipe.corelatus.se> <20060328144130.GA22136@geneity.co.uk> Message-ID: <20060329085719.GA24632@geneity.co.uk> Bingo! The bit I've been missing: catch, thanks very much, Mark On Tue, Mar 28, 2006 at 07:36:01PM +0100, Richard Cameron wrote: > > On 28 Mar 2006, at 15:41, mark@REDACTED wrote: > > >Ok, me being daft there. So how can I handle the timeout? I don't want > >the gen_server to die when the call exceeds the timeout, I just > >want to > >be able to reflect that this has happened. > > Have a look at this: > > -module(s). > -compile(export_all). > -define(SERVER,srv). > > start() -> gen_server:start({local,?SERVER}, ?MODULE, [], []). > start_link() -> gen_server:start_link({local,?SERVER}, ?MODULE, [], []). > fail() -> gen_server:call(?SERVER, slowcall, 1). > safe() -> case catch fail() of > {'EXIT', Reason} -> {error, Reason}; > Other -> {ok, Other} > end. > > init([]) -> {ok, []}. > handle_call(slowcall,_,State) -> > receive after 100 -> x end, > {reply, ok, State}. > > > There are a few ways of running it. Here's what you were doing: > > Eshell V5.4.13 (abort with ^G) > 1> s:start_link(). > {ok,<0.37.0>} > 2> s:fail(). > ** exited: {timeout,{gen_server,call,[srv,slowcall,1]}} ** > 3> s:fail(). > ** exited: {noproc,{gen_server,call,[srv,slowcall,1]}} ** > > What's happening here is that you're using the shell to start your > gen_server process. 
You end up linking the shell process to the > gen_server process which means that if that the shell process dies, > it takes down the gen_server with it. The default shell behaviour is > that if it executes a function which fails, it exits and a new > "replacement shell" process is spawned off to carry on. This is > what's happening here. Your first call to s:fail() does just that - > it fails. The gen_server was fine at that point... it didn't crash > until it got brought down by the shell exiting (because the two > processes are linked). > > Here's another way of running the example: > > 1> s:start(). > {ok,<0.37.0>} > 2> s:fail(). > ** exited: {timeout,{gen_server,call,[srv,slowcall,1]}} ** > 3> s:fail(). > ** exited: {timeout,{gen_server,call,[srv,slowcall,1]}} ** > > Again, the fail() function fails, but this time the shell process > dying doesn't cause the gen_server to terminate. At the end of this > example it's still alive and well and processing requests. > > Another way is this: > > Eshell V5.4.13 (abort with ^G) > 1> s:start_link(). > {ok,<0.37.0>} > 2> s:safe(). > {error,{timeout,{gen_server,call,[srv,slowcall,1]}}} > 3> s:safe(). > {error,{timeout,{gen_server,call,[srv,slowcall,1]}}} > > This time, although the shell process is linked to the gen_server, > I've caught the error from the fail() function and so the shell > didn't exit. Thus, it didn't cause the gen_server to die. > > > All three approaches are equally valid in different situations. Joe > Armstrong's book (and his PhD thesis) gives examples of when you'd > want to link processes together and when you wouldn't. Also, although > there *is* a "catch" keyword in Erlang it's used surprisingly rarely. > Catching all errors like this sometimes isn't actually terribly > helpful, and it's certainly not something the language just makes you > do to be awkward (religiously wrapping every possible things which > could fail with a catch statement to prevent any knock-on effect for > any processes linked to it - although I have seen Java programmers do > things like this). > > You need to have a strategy of which processes should die if a call > to the gen_server doesn't succeed. There could be instances where you > return a "Sorry, system can't cope with the load at the moment" to > the end user and press on regardless. On the other hand you might be > talking about a realtime system where the gen_server can't and > shouldn't take that long to reply. If that's the case then you'd > probably want to ask the gen_server to crash (crashing is a _good_ > thing in Erlang), shutdown cleanly, and then arrange for it to be > restarted (by a supervisor) to get it into a decent state in order to > continue processing requests. This is the Erlang "let it crash" > approach. > > The OTP design principles document is probably the best starting > point for all this stuff: > > http://erlang.se/doc/doc-5.4.12/doc/design_principles/part_frame.html > > Richard. > > !DSPAM:442982a6199691251019417! From samuel@REDACTED Wed Mar 29 13:50:36 2006 From: samuel@REDACTED (Samuel Rivas) Date: Wed, 29 Mar 2006 13:50:36 +0200 Subject: test_server minor bug+patch Message-ID: <20060329115036.GA24823@lambdastream.com> Hi, There is a bug in test_sever that prevents test_sever:call_crash/3 from working properly (or working at all :). I've attached a correcting patch. 
Regards -- Samuel -------------- next part -------------- --- otp_src_R10B-10/lib/test_server/src/test_server_sup.erl 2006-03-29 13:32:56.000000000 +0200 +++ otp_src_R10B-10.patched/lib/test_server/src/test_server_sup.erl 2006-03-29 13:39:20.000000000 +0200 @@ -128,12 +128,16 @@ exit({wrong_crash_reason,Reason}); {'EXIT',OtherPid,Reason} when OldTrapExit == false -> exit({'EXIT',OtherPid,Reason}) - after trunc(Time) -> + after safe_trunc(Time) -> exit(call_crash_timeout) end, process_flag(trap_exit,OldTrapExit), Answer. +safe_trunc(infinity) -> + infinity; +safe_trunc(N) -> + trunc(N). From luna@REDACTED Wed Mar 29 13:52:14 2006 From: luna@REDACTED (Daniel Luna) Date: Wed, 29 Mar 2006 13:52:14 +0200 (CEST) Subject: Getting a *more* random value In-Reply-To: <200603291008.18879.ft@it.su.se> References: <200603290140.49433.pupeno@pupeno.com> <200603291008.18879.ft@it.su.se> Message-ID: On Wed, 29 Mar 2006, Fredrik Thulin wrote: > On Wednesday 29 March 2006 01:40, Pupeno wrote: >> How am I supposed to get a number that is not the same on every >> process? > > Seed the random generator with something that isn't the same when you > restart your application? If you have some Lego at home, you could try to seed your generator with something like this: http://www.gamesbyemail.com/dicegenerator /Luna -- Daniel Luna | Top reasons that I have a beard: luna@REDACTED | a) Laziness. http://www.update.uu.se/~luna/ | b) I can. Don't look at my homepage (it stinks).| c) I can get away with it. From serge@REDACTED Wed Mar 29 14:31:12 2006 From: serge@REDACTED (Serge Aleynikov) Date: Wed, 29 Mar 2006 07:31:12 -0500 Subject: os_mon & alarm_handler in R10B-10 In-Reply-To: <442A4ADD.10703@erix.ericsson.se> References: <44246D83.2000402@hq.idt.net> <44249B8E.2030105@hq.idt.net> <44278F7A.5070702@erix.ericsson.se> <44282FAC.3090804@hq.idt.net> <4428FD98.8060201@erix.ericsson.se> <44294804.10502@hq.idt.net> <442A4ADD.10703@erix.ericsson.se> Message-ID: <442A7E10.4050804@hq.idt.net> Hi Micael, What keeps bothering me is the fact that if you examine otp_mibs application file, it doesn't mention the need for mnesia: {application, otp_mibs, [..., {applications, [kernel, stdlib, snmp]}, ... ]}. Consequently, release files such as the following: {release, ..., [ {kernel , "2.10.13"}, {stdlib , "1.13.12"}, {otp_mibs, "1.0.4"}, {mnesia , "4.2.5"}, % Mnesia is started in this case {snmp , "4.7.1"} ] }. {release, ..., [ {kernel , "2.10.13"}, {stdlib , "1.13.12"}, {otp_mibs, "1.0.4"}, {snmp , "4.7.1"} ] }. together with the application's config file containing: {snmp, [{agent, [..., {mibs, ["mibs/priv/OTP-MIB"]}, ...]}]} will start by successfully loading OTP-MIB into the SNMP agent's data store. Nonetheless, the later definition of the release file will lead to run-time errors in SNMP manager's queries to OIDs inside the OTP-MIB. While I still think that SNMP agent shouldn't allow to load a mib if a 'new' instrumentation function is defined for an OID, and the 'new' instrumentation call raises an exception, the OTP-MIB's dependency on mnesia should at least be documented to avoid confusion. Regards, Serge Micael Karlberg wrote: > Hi, > > Yeah, I got what you where after. But what I meant was that since > the new (and delete) function(s) are optional, exceptions is to > be expected! And since I don't know how instrumentation functions > are implemented, I cannot know exactly what exceptions I can > expect (function clause, case clause, ...). 
This together with > the fact that these functions have no defined reply, makes it > simply not worth the effort to try to "handle" the reply. > > Regards, > /BMK > > Serge Aleynikov wrote: > >> Micael Karlberg wrote: >> >>> Hi, >>> >>> The new (and delete) function is an optional one. Also >>> there is no defined return value for this function. >>> Therefor it's not worth the effort to try to figure if >>> the result is ok or not. >> >> >> BTW, I once again reread you email and realized that I should've >> stated this proposal in my former email to make my point more apparent: >> >> _= (catch call_instrumentation(..., new)). >> >> change to: >> >> _ = call_instrumentation(..., new). >> >> The result still doesn't need to be examined, but I believe, the >> exception should not be ignored. >> >> Serge >> >> > From xinumike-sip@REDACTED Wed Mar 29 23:54:03 2006 From: xinumike-sip@REDACTED (xinumike-sip@REDACTED) Date: Wed, 29 Mar 2006 13:54:03 -0800 (PST) Subject: Megaco Message-ID: <20060329215403.89755.qmail@web34305.mail.mud.yahoo.com> Hello, I was just wondering if anybody could answer questions about the simple media gateway controller and the simple media gateway. For example, I bring up the MGC and the MG as outlined in the book. Everybody seems happy. Is it possible to simulate a call? If so, what commands would I type for that? Is there anything else that can be simulated? Thanks a lot! -Mike Dorin From fritchie@REDACTED Thu Mar 30 02:00:02 2006 From: fritchie@REDACTED (Scott Lystig Fritchie) Date: Wed, 29 Mar 2006 18:00:02 -0600 Subject: Mnesia and additional indexes: a cautionary tale Message-ID: <200603300000.k2U0024P029679@snookles.snookles.com> Greetings. I have a cautionary tale to tell about Mnesia and adding an extra attribute index. The story starts with panic (mine!) late last night. I was doing some route performance tests for a Mnesia-based application: simple 1-attribute changes to single records in several tables. Updates for one specific table were 2.5 *orders of magnitude* slower than all others. All of the tables were disc_copies tables. All contained 200K entries. All fit quite comfortably in RAM without pissing off the virtual memory system. It was late, and I didn't want to struggle with remembering how to use "fprof" or "eprof", so I used "cprof". IIRC, "cprof" can profile all Erlang processes without lots of brain power or keystrokes. (It was late, I was tired.) Cprof showed that about close to 2 orders of magnitude fewer VM reductions being executed. Huh. That was not what I wanted to see. Go to sleep, wake up refreshed, then tackle the problem again. Additional profiling is frustrated: no Erlang functions claim the extra time. Perhaps I'm just inept at "fprof" subtlety, somehow omitting the Erlang process that was eating all the CPU time? {shrug} Later in the afternoon, I shutdown Mnesia, then restart it. My application starts timing out at mnesia:wait_for_tables/2. So I start mnesia manually, then go get coffee and make a phone call. When I return 15 minutes later, Mnesia *still* hasn't finished starting up. The "beam" process size should've been about 1,400KB with everything loaded. But the process size was only 390MB, and "beam" was still using 100% CPU time ... doing something, I dunno what! So, I kill the VM and restart. Before starting Mnesia, I use mnesia:set_debug_level(verbose). Sure enough, I see messages like: Mnesia(pss1@REDACTED): Intend to load tables: [{'Tab1',local_only}, {'Tab2',local_only}, {'Tab3',local_only}, {'Tab4',local_only}, ... 
] Mnesia(pss1@REDACTED): Mnesia started, 0 seconds Mnesia(pss1@REDACTED): Creating index for 'Tab1' Mnesia(pss1@REDACTED): Creating index for 'Tab2' Mnesia(pss1@REDACTED): Creating index for 'Tab3' Mnesia(pss1@REDACTED): Creating index for 'Tab3' ... and it hangs there, eating 100% CPU and getting no further. A quick edit to mnesia_index.erl to include the attribute position number shows me this instead: Mnesia(pss1@REDACTED): Creating index for 'Tab1' Pos 3 Mnesia(pss1@REDACTED): Creating index for 'Tab2' Pos 7 Mnesia(pss1@REDACTED): Creating index for 'Tab3' Pos 3 Mnesia(pss1@REDACTED): Creating index for 'Tab3' Pos 5 Ah! Suddenly, it becomes very, very clear. The table 'Tab3' contains 200K of debugging/development records. When the code to create those records was first written, the attribute at position #5 was a constant binary term. Then "feature creep" happened, and an extra Mnesia index was created for position #5. At the 200K records were added slowly, no one noticed the performance impact using the exact same term for position #5 ... until I did, last last night. Moral of the story for Mnesia users (and other databases, I'm sure): beware of the impact of adding secondary indexes. For the Mnesia dev team, I have two questions: 1. That change to mnesia_index.erl is awfully handy ... though unfortunately only handy when the Mnesia debug level is changed from the default. 2. What are the odds that a future release could have less evil behavior (less than O(N^2), taking a wild guess) with secondary indexes like my (unfortunate, pilot error!) story? -Scott From ok@REDACTED Thu Mar 30 06:22:21 2006 From: ok@REDACTED (Richard A. O'Keefe) Date: Thu, 30 Mar 2006 16:22:21 +1200 (NZST) Subject: Conditional compilation (was: Erlang/OTP R10B-10 has been released) Message-ID: <200603300422.k2U4MLM3488372@atlas.otago.ac.nz> "Vlad Dumitrescu XX \(LN/EAB\)" asked: Is it really that bad? I received another copy earlier as private mail, so answered it privately. Briefly, the answer is "yes it IS that bad". Basically, - comments that are designed to wrap around code cannot handle free text. (Back in the days of UNIX Version 7 on a PDP-11, I used to use #if 0 lots of comment stuff here #endif 0 as "big block comments". However, under the rules of ANSI C89 that just does not work: the stuff inside #if/#endif *must* be made up of well formed preprocessing-tokens.) - comments that are designed to handle free text (constrained perhaps so that comment brackets balance) cannot handle code. In short, you need TWO kinds of comments. token -> constant | identifier | keyword | punctuation | space space -> white space character | end of line comment | text comment | code comment text comment -> "(*" ("not (*" | "not *)" | ok char | text comment)* "*)" code comment -> "[*" token* "*]" You can comment code out by wrapping code comment brackets around it, but you had better be darned sure NOT to use text comment brackets. You can include free text by wrapping text comment brackets around it, but you had better be darned sure NOT to use code comment brackets. Because text and code follow DIFFERENT lexical rules. If there is a language that has two kinds of comment like that, then yes, nesting comments work in it. (PROVIDED you always use the right kind!) All the languages I know that claim to have nesting comments try to use text comments for both jobs, and that's what doesn't Work. (Because text comments do not know about string and atom tokens.) 
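To make the two-kinds-of-comments point concrete, here is a small sketch (not from the original mail; the module name comment_demo and the drastically simplified tokeniser are invented for illustration) of skippers for the "(* ... *)" text comments and "[* ... *]" code comments described above. The text-comment skipper only knows about comment brackets, so a "*)" hidden inside a string literal of commented-out code closes it too early; the code-comment skipper consumes string literals as tokens, so nesting over code works - but it would mis-handle free text containing an unbalanced double quote, which is exactly the asymmetry described above.

    -module(comment_demo).
    -export([skip_text/1, skip_code/1]).

    %% Skip a nested text comment. Call with the characters following the
    %% opening "(*"; returns whatever follows the matching "*)". Only the
    %% comment brackets themselves are recognised.
    skip_text([$*, $) | Rest]) -> Rest;
    skip_text([$(, $* | Rest]) -> skip_text(skip_text(Rest));
    skip_text([_ | Rest])      -> skip_text(Rest);
    skip_text([])              -> erlang:error(unterminated_comment).

    %% Skip a nested code comment ("[*" ... "*]"). String literals are
    %% consumed whole, so comment brackets inside them are harmless.
    %% (A real tokeniser would also handle quoted atoms, $-characters, etc.)
    skip_code([$*, $] | Rest]) -> Rest;
    skip_code([$[, $* | Rest]) -> skip_code(skip_code(Rest));
    skip_code([$" | Rest])     -> skip_code(skip_string(Rest));
    skip_code([_ | Rest])      -> skip_code(Rest);
    skip_code([])              -> erlang:error(unterminated_comment).

    skip_string([$\\, _ | Rest]) -> skip_string(Rest);   %% escaped character
    skip_string([$" | Rest])     -> Rest;
    skip_string([_ | Rest])      -> skip_string(Rest);
    skip_string([])              -> erlang:error(unterminated_string).

For example, comment_demo:skip_text("foo *) bar") returns " bar"; but if the commented-out text is code containing the string literal "*)", skip_text stops inside that string, while skip_code consumes the string as a single token and keeps scanning for its own "*]" terminator.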
From rvg@REDACTED Thu Mar 30 07:38:49 2006 From: rvg@REDACTED (Rudolph van Graan) Date: Thu, 30 Mar 2006 07:38:49 +0200 Subject: Mnesia and additional indexes: a cautionary tale In-Reply-To: <200603300000.k2U0024P029679@snookles.snookles.com> References: <200603300000.k2U0024P029679@snookles.snookles.com> Message-ID: Hi Scott, > Later in the afternoon, I shutdown Mnesia, then restart it. My > application starts timing out at mnesia:wait_for_tables/2. So I start > mnesia manually, then go get coffee and make a phone call. When I > return 15 minutes later, Mnesia *still* hasn't finished starting up. I have seen similar things happen in production systems. In our case, the database was about 700Mb in size, roughly split between three tables of similar size. When the system gets to this point, we always have to clean the tables and dump the data (not nice, but acceptable in this system) as restoring from backups will only delay the problem from reoccurring in a couple of hours. > The "beam" process size should've been about 1,400KB with everything > loaded. But the process size was only 390MB, and "beam" was still > using 100% CPU time ... doing something, I dunno what! When this happens, it is mostly DETS related processes running doing... what? If you kill the VM when in this state, I've seen the following happen: =ERROR REPORT==== 29-Dec-2005::16:12:01 === Mnesia(vm1@REDACTED): ** ERROR ** (core dumped to file: "d:/Mnesi aCore. vm1@REDACTED") ** FATAL ** {error,{"Cannot open dets table", ssycpy, [{file,"d:/db/ssycpy.DAT"}, {keypos,2}, {repair,true}, {type,set}], {bad_freelists,"d:/db/ssycpy.DAT"}}} =ERROR REPORT==== 29-Dec-2005::16:12:11 === Mnesia(vm1@REDACTED): ** ERROR ** mnesia_event got unexpected event: {'EXIT', <0.39. 0>, killed } =INFO REPORT==== 29-Dec-2005::16:12:11 === application: mnesia exited: {killed,{mnesia_sup,start,[normal,[]]}} type: permanent {"Kernel pid terminated",application_controller,shutdown} Crash dump was written to: erl_crash.dump Kernel pid terminated (application_controller) (shutdown) I tried figuring out what this means in the mnesia code, but gave up. > > The table 'Tab3' contains 200K of debugging/development records. When > the code to create those records was first written, the attribute at > position #5 was a constant binary term. Do you mean constant as in it will never change after writing the first time or constant as in all of them the same? > Then "feature creep" happened, and an extra Mnesia index was created > for position #5. At the 200K records were added slowly, no one > noticed the performance impact using the exact same term for position > #5 ... until I did, last last night. I guess you mean all the entries have the same term in position 5. So in essence you suggest this happens when a lot of records contain the same value in an indexed field? In our system, I have slowly started to lose trust in mnesia and currently our design policy excludes using mnesia for tables that "grow", i.e. new records being added. The issue is that I cannot easily describe the symptoms - is it because the DETS table got corrupted due to a dirty shutdown of the machine? Is it the fact that we use large tables for which recovery is difficult? This is the first time that I saw anyone with similar symptoms. Last question - is this running Windows as host OS? I have not seen this happen on Mac OS or Linux, but that might be incidental. 
Rudolph From dgud@REDACTED Thu Mar 30 09:44:47 2006 From: dgud@REDACTED (Dan Gudmundsson) Date: Thu, 30 Mar 2006 09:44:47 +0200 Subject: Mnesia and additional indexes: a cautionary tale In-Reply-To: <200603300000.k2U0024P029679@snookles.snookles.com> References: <200603300000.k2U0024P029679@snookles.snookles.com> Message-ID: <17451.35951.812604.135926@rian.du.uab.ericsson.se> Mnesia indecies are implemented with an additional [d]ets _BAG_ table per index, which have the secondary index as a key and the value is the key in the real table. Insertion time in ets bag tables are linear, and have to be that way mnesia relies on the insertion order (in other parts). You are not the first person to have made that mistake I can assure you :-) The others have most often done it on test systems, though, then they come and complain about mnesia's lousy insertion performance.. Maybe I should add something more about it in the manual... /Dan Scott Lystig Fritchie writes: > Greetings. I have a cautionary tale to tell about Mnesia and adding > an extra attribute index. > > The story starts with panic (mine!) late last night. I was doing some > route performance tests for a Mnesia-based application: simple > 1-attribute changes to single records in several tables. Updates for > one specific table were 2.5 *orders of magnitude* slower than all > others. > > All of the tables were disc_copies tables. All contained 200K > entries. All fit quite comfortably in RAM without pissing off the > virtual memory system. > > It was late, and I didn't want to struggle with remembering how to use > "fprof" or "eprof", so I used "cprof". IIRC, "cprof" can profile all > Erlang processes without lots of brain power or keystrokes. (It was > late, I was tired.) Cprof showed that about close to 2 orders of > magnitude fewer VM reductions being executed. Huh. That was not what > I wanted to see. > > Go to sleep, wake up refreshed, then tackle the problem again. > Additional profiling is frustrated: no Erlang functions claim the > extra time. Perhaps I'm just inept at "fprof" subtlety, somehow > omitting the Erlang process that was eating all the CPU time? {shrug} > > Later in the afternoon, I shutdown Mnesia, then restart it. My > application starts timing out at mnesia:wait_for_tables/2. So I start > mnesia manually, then go get coffee and make a phone call. When I > return 15 minutes later, Mnesia *still* hasn't finished starting up. > > The "beam" process size should've been about 1,400KB with everything > loaded. But the process size was only 390MB, and "beam" was still > using 100% CPU time ... doing something, I dunno what! > > So, I kill the VM and restart. Before starting Mnesia, I use > mnesia:set_debug_level(verbose). Sure enough, I see messages like: > > Mnesia(pss1@REDACTED): Intend to load tables: [{'Tab1',local_only}, > {'Tab2',local_only}, > {'Tab3',local_only}, > {'Tab4',local_only}, > ... > ] > Mnesia(pss1@REDACTED): Mnesia started, 0 seconds > Mnesia(pss1@REDACTED): Creating index for 'Tab1' > Mnesia(pss1@REDACTED): Creating index for 'Tab2' > Mnesia(pss1@REDACTED): Creating index for 'Tab3' > Mnesia(pss1@REDACTED): Creating index for 'Tab3' > > ... and it hangs there, eating 100% CPU and getting no further. 
> > A quick edit to mnesia_index.erl to include the attribute position > number shows me this instead: > > Mnesia(pss1@REDACTED): Creating index for 'Tab1' Pos 3 > Mnesia(pss1@REDACTED): Creating index for 'Tab2' Pos 7 > Mnesia(pss1@REDACTED): Creating index for 'Tab3' Pos 3 > Mnesia(pss1@REDACTED): Creating index for 'Tab3' Pos 5 > > Ah! Suddenly, it becomes very, very clear. > > The table 'Tab3' contains 200K of debugging/development records. When > the code to create those records was first written, the attribute at > position #5 was a constant binary term. > > Then "feature creep" happened, and an extra Mnesia index was created > for position #5. At the 200K records were added slowly, no one > noticed the performance impact using the exact same term for position > #5 ... until I did, last last night. > > Moral of the story for Mnesia users (and other databases, I'm sure): > beware of the impact of adding secondary indexes. > > For the Mnesia dev team, I have two questions: > > 1. That change to mnesia_index.erl is awfully handy ... though > unfortunately only handy when the Mnesia debug level is changed > from the default. > > 2. What are the odds that a future release could have less evil > behavior (less than O(N^2), taking a wild guess) with secondary > indexes like my (unfortunate, pilot error!) story? > > -Scott From ulf.wiger@REDACTED Thu Mar 30 10:23:07 2006 From: ulf.wiger@REDACTED (Ulf Wiger (AL/EAB)) Date: Thu, 30 Mar 2006 10:23:07 +0200 Subject: Mnesia and additional indexes: a cautionary tale Message-ID: Enter the 'rdbms' contrib... I see a couple of traits of the additional indexing support in rdbms that could help in this particular situation: - You can have disc_copy indexes, which are not rebuilt every time mnesia is restarted - You can have ordered indexes, which don't have linear insertion complexity. They are ordered_set tables, where the key is {IndexValue, Oid}. Still waiting for some feedback as to whether this either sucks badly or actually helps ... (: /Ulf W > -----Original Message----- > From: owner-erlang-questions@REDACTED > [mailto:owner-erlang-questions@REDACTED] On Behalf Of Dan > Gudmundsson > Sent: den 30 mars 2006 09:45 > To: Scott Lystig Fritchie > Cc: erlang-questions@REDACTED > Subject: Mnesia and additional indexes: a cautionary tale > > > Mnesia indecies are implemented with an additional [d]ets > _BAG_ table per index, which have the secondary index as a > key and the value is the key in the real table. > > Insertion time in ets bag tables are linear, and have to be > that way mnesia relies on the insertion order (in other parts). > > You are not the first person to have made that mistake I can > assure you :-) > > The others have most often done it on test systems, though, > then they come and complain about mnesia's lousy insertion > performance.. > Maybe I should add something more about it in the manual... > > /Dan > > Scott Lystig Fritchie writes: > > Greetings. I have a cautionary tale to tell about Mnesia > and adding > an extra attribute index. > > > > The story starts with panic (mine!) late last night. I > was doing some > route performance tests for a Mnesia-based > application: simple > 1-attribute changes to single records > in several tables. Updates for > one specific table were > 2.5 *orders of magnitude* slower than all > others. > > > > All of the tables were disc_copies tables. All contained > 200K > entries. All fit quite comfortably in RAM without > pissing off the > virtual memory system. 
> > > > It was late, and I didn't want to struggle with > remembering how to use > "fprof" or "eprof", so I used > "cprof". IIRC, "cprof" can profile all > Erlang processes > without lots of brain power or keystrokes. (It was > late, > I was tired.) Cprof showed that about close to 2 orders of > > magnitude fewer VM reductions being executed. Huh. That > was not what > I wanted to see. > > > > Go to sleep, wake up refreshed, then tackle the problem again. > > Additional profiling is frustrated: no Erlang functions > claim the > extra time. Perhaps I'm just inept at "fprof" > subtlety, somehow > omitting the Erlang process that was > eating all the CPU time? {shrug} > > Later in the > afternoon, I shutdown Mnesia, then restart it. My > > application starts timing out at mnesia:wait_for_tables/2. > So I start > mnesia manually, then go get coffee and make a > phone call. When I > return 15 minutes later, Mnesia > *still* hasn't finished starting up. > > > > The "beam" process size should've been about 1,400KB with > everything > loaded. But the process size was only 390MB, > and "beam" was still > using 100% CPU time ... doing > something, I dunno what! > > > > So, I kill the VM and restart. Before starting Mnesia, I > use > mnesia:set_debug_level(verbose). Sure enough, I see > messages like: > > > > Mnesia(pss1@REDACTED): Intend to load tables: > [{'Tab1',local_only}, > > > {'Tab2',local_only}, > > > {'Tab3',local_only}, > > > {'Tab4',local_only}, > > ... > > ] > > Mnesia(pss1@REDACTED): Mnesia started, 0 seconds > > Mnesia(pss1@REDACTED): Creating index for 'Tab1' > > Mnesia(pss1@REDACTED): Creating index for 'Tab2' > > Mnesia(pss1@REDACTED): Creating index for 'Tab3' > > Mnesia(pss1@REDACTED): Creating index for 'Tab3' > > > > ... and it hangs there, eating 100% CPU and getting no further. > > > > A quick edit to mnesia_index.erl to include the attribute > position > number shows me this instead: > > > > Mnesia(pss1@REDACTED): Creating index for 'Tab1' Pos 3 > > Mnesia(pss1@REDACTED): Creating index for 'Tab2' Pos 7 > > Mnesia(pss1@REDACTED): Creating index for 'Tab3' Pos 3 > > Mnesia(pss1@REDACTED): Creating index for 'Tab3' Pos 5 > > > > Ah! Suddenly, it becomes very, very clear. > > > > The table 'Tab3' contains 200K of debugging/development > records. When > the code to create those records was first > written, the attribute at > position #5 was a constant binary term. > > > > Then "feature creep" happened, and an extra Mnesia index > was created > for position #5. At the 200K records were > added slowly, no one > noticed the performance impact using > the exact same term for position > #5 ... until I did, last > last night. > > > > Moral of the story for Mnesia users (and other databases, > I'm sure): > > beware of the impact of adding secondary indexes. > > > > For the Mnesia dev team, I have two questions: > > > > 1. That change to mnesia_index.erl is awfully handy ... though > > unfortunately only handy when the Mnesia debug level is changed > > from the default. > > > > 2. What are the odds that a future release could have less evil > > behavior (less than O(N^2), taking a wild guess) with secondary > > indexes like my (unfortunate, pilot error!) story? 
> > > > -Scott > > > From samuel@REDACTED Thu Mar 30 10:32:32 2006 From: samuel@REDACTED (Samuel Rivas) Date: Thu, 30 Mar 2006 10:32:32 +0200 Subject: Defensive programming In-Reply-To: References: Message-ID: <20060330083231.GA10541@lambdastream.com> Joe Armstrong (AL/EAB) wrote: > > Today I've found this piece of code I wrote some months ago: > > > > acceptor(tcp, Module, LSocket) -> > > case gen_tcp:accept(LSocket) of > > {ok, Socket} -> > > case Module:start() of > > {ok, Pid} -> > > ok = gen_tcp:controlling_process(Socket, Pid), > > gen_server:cast(Pid, {connected, Socket}), > > acceptor(tcp, Module, LSocket); > > {error, Error} -> > > {stop, {Module, LSocket, Error}} > > end; > > {error, Reason} -> > > {stop, {Module, LSocket, Reason}} > > end; > > 14 lines of complex code - with a doubly indented case clause > > > > > is that too defensive ? should I write it this way > > > > acceptor(tcp, Module, LSocket) -> > > {ok, Socket} = case gen_tcp:accept(LSocket), > > {ok, Pid} = Module:start() > > ok = gen_tcp:controlling_process(Socket, Pid), > > gen_server:cast(Pid, {connected, Socket}), > > acceptor(tcp, Module, LSocket); > > > > vs 6 lines of linear code with no conditional structure. > > How can you ask the question - you KNOW the answer. > > Six lines of linear code is far better that 14 lines of code with > conditional structure. > > [ ... snip ...] > > I just say: > > "Let it crash" > > [ ... snip ...] > > Error handling in Erlang is based on the idea of "workers" and > "observers" > where both workers and observers are processes. > > +-----------+ +-----------+ > | Worker |------->---------| Observer | > +-----------+ error signal +-----------+ > I have a doubt about that: In my view, the former piece of code is _almost_ fine. It's fine because on absence on errors it does exactly what you want; no more, no less. I said "almost" because on errors it does not behave exactly as I'd like. Elaboration: the error the observer gets is not the actual error, but a "badmatch". Thus, I usually wrap OTPish functions like that: -module(foo) bar(Anything, AnotherThing) -> case an_otp_lib:bar(Anything, AnotherThing) of {ok, Value} -> Value; {error, Reason} -> erlang:error(Reason) end. Now, my code would read almost as proposed (getting rid of the {ok,Result} matches) and the observer can try-catch errors if needed, obtaining the actual exit reason. This way, functions always return a valid term, failing otherwise. However, that comes with some problems: - I have to wrap a lot of things - I can not easily distinguish expected errors from bugs (badmathces, undefs, etc) any more, since they fall in the same catch category. - I feel going against the tide since supervisors and gen_servers are not particulary happy with this philosophy (they stubbornly catch error signals on startups and return {error, Reason} tuples; among other nuisances). I can solve the second point changing erlang:error() with exit() thus catching exit signals, but it leaves a dirty feeling (I don't want to exit really, I want to signal an error). The other two are solved with a fair amount of adapting code. For example, I build my gen_servers using these utility functions: -module(foo_lib). server_call(Server, Request) -> case gen_server:call(Server, Request) of {error, Reason} -> exit(Reason); Value -> Value end. 
secure_call(Request, From, State, Module, CallFunc) -> try Module:CallFunc(Request, From, State) of Result when is_tuple(Result), size(Result) == 3 -> Result; _ -> %% Pretty defensive against annoying bugs erlang:error(bad_call_return) catch error:Reason -> {reply, {error, Reason}, State}; end. So in the server module, handle_call is implemented like this: handle_call(Request, From, State) -> foo_lib:secure_call(Request, From, State, ?MODULE, handle_call2). handle_call2({bar, Args}, _From, State) -> {Reply, NewState} = handle_bar(Args), {reply, Reply, NewState}; I wonder whether this "make it crash (TM)" approach is somewhat overkill. Regards -- Samuel From chandrashekhar.mullaparthi@REDACTED Thu Mar 30 10:48:15 2006 From: chandrashekhar.mullaparthi@REDACTED (chandru) Date: Thu, 30 Mar 2006 09:48:15 +0100 Subject: Mnesia and additional indexes: a cautionary tale In-Reply-To: References: Message-ID: On 30/03/06, Ulf Wiger (AL/EAB) wrote: > > > Enter the 'rdbms' contrib... > > I see a couple of traits of the additional indexing > support in rdbms that could help in this > particular situation: > > - You can have disc_copy indexes, which are not rebuilt > every time mnesia is restarted Is the case where mnesia recovers from a partitioned network taken care of? The index tables will have to be rebuilt in this case. I'm sure you would've but I thought I'll ask anyway. cheers Chandru PS: haven't used rdbms (yet) -------------- next part -------------- An HTML attachment was scrubbed... URL: From ulf.wiger@REDACTED Thu Mar 30 11:23:31 2006 From: ulf.wiger@REDACTED (Ulf Wiger (AL/EAB)) Date: Thu, 30 Mar 2006 11:23:31 +0200 Subject: Mnesia and additional indexes: a cautionary tale Message-ID: The indexes in rdbms are first-class tables, and all index updates are performed within a transaction context. Thus, solving the partitioned network problem will be done in roughly the same way (I assume you mean setting master nodes?) There is (more or less) also a rebuild_indexes(Tab) function, using a mnesia_recover. It's mainly used for adding indexes to existing tables and populating them within the scope of a schema transaction. One can also, of course, break the abstraction and simply clobber the rdbms indexes and rebuild them by hand (by bypassing the wrapper functions in rdbms). I could make this a bit simpler and clearly documented, if it's deemed necessary. BR, Ulf W ________________________________ From: chandru [mailto:chandrashekhar.mullaparthi@REDACTED] Sent: den 30 mars 2006 10:48 To: Ulf Wiger (AL/EAB) Cc: erlang-questions@REDACTED Subject: Re: Mnesia and additional indexes: a cautionary tale On 30/03/06, Ulf Wiger (AL/EAB) wrote: Enter the 'rdbms' contrib... I see a couple of traits of the additional indexing support in rdbms that could help in this particular situation: - You can have disc_copy indexes, which are not rebuilt every time mnesia is restarted Is the case where mnesia recovers from a partitioned network taken care of? The index tables will have to be rebuilt in this case. I'm sure you would've but I thought I'll ask anyway. cheers Chandru PS: haven't used rdbms (yet) -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From xlcr@REDACTED Thu Mar 30 13:22:26 2006 From: xlcr@REDACTED (Nick Linker) Date: Thu, 30 Mar 2006 18:22:26 +0700 Subject: Newbie questions In-Reply-To: <55E3D4E7-D5BC-4730-BA3E-8246DDBDBC40@rogvall.com> References: <200603250847.k2P8lBfH007162@spikklubban.it.uu.se> <44276DD2.2070503@mail.ru> <55E3D4E7-D5BC-4730-BA3E-8246DDBDBC40@rogvall.com> Message-ID: <442BBF72.6020502@mail.ru> Tony Rogvall wrote: >> Good solution :-) >> Now I also have different idea without using recursion. It is based >> on the following equation >> [F_n ] [1 1] [F_n-1] >> [ ] = [ ] * [ ] >> [F_n-1] [1 0] [F_n-2] >> And we just have to calculate k-th power of the matrix [[1,1], >> [1,0]]. It is possible to do within O(log(k)). >> > > Here is my contrib (written many years ago, it may work :-) > > ffib(N) -> > {UN1,_,_,_} = lucas_numbers(1,-1,N-1), > UN1. > > lucas_numbers(P, Q, M) -> > m_mult(exp({P,-Q,1,0}, M), {1,P,0,2}). > > exp(A, N) when tuple(A), size(A)==4, is_integer(N), N > 0 -> > m_exp(A, N, {1,0,0,1}). > > m_exp(A, 1, P) -> > m_mult(A,P); > m_exp(A, N, P) when ?even(N) -> > m_exp(m_sqr(A), N div 2, P); > m_exp(A, N, P) -> > m_exp(m_sqr(A), N div 2, m_mult(A, P)). > > m_mult({A11,A12,A21,A22}, {B11,B12,B21,B22}) -> > { A11*B11 + A12*B21, A11*B12 + A12*B22, > A21*B11 + A22*B21, A21*B12 + A22*B22 }. > > m_sqr({A,B,C,D}) -> > BC = B*C, > A_D = A + D, > { A*A+BC, B*A_D, C*A_D, D*D + BC }. > > > /Tony Cool. I guess the problem of computing Fibonacci is finally closed. Thanks. Best regards, Linker Nick. From xlcr@REDACTED Thu Mar 30 13:30:26 2006 From: xlcr@REDACTED (Nick Linker) Date: Thu, 30 Mar 2006 18:30:26 +0700 Subject: Newbie questions In-Reply-To: References: Message-ID: <442BC152.6040904@mail.ru> Joe Armstrong (AL/EAB) wrote: > 2) - how many (decimal) digits are there in fib(N) > > 2) is much more difficult - you could or course just compute fib(N) > then take the log - > but this is cheating - so can you compute the number of decomal > digits in fib(N) without > actually going to the trouble of computing fib(N) - now this might be > easy but it certainly is not > obvious... > > /Joe I'm sorry, but I meant quite different question: given a big number N, how to compute the number of digits? There is obvious solution: length(integer_to_list(N)), but it is not very efficient. I wanted to have a bit faster solution... Ok, nevermind. Best regards, Linker Nick. From raimo@REDACTED Thu Mar 30 15:01:38 2006 From: raimo@REDACTED (Raimo Niskanen) Date: 30 Mar 2006 15:01:38 +0200 Subject: Newbie questions References: <200603250847.k2P8lBfH007162@spikklubban.it.uu.se>, <44276DD2.2070503@mail.ru> Message-ID: Regarding your problem about determining the number of decimal digits in a number, I just came to think of a simple enough brute force O(log(N)) or rather O(NN) where NN is the number of digits in the number: digits(N) when is_integer(N), N >= 0 -> digits(N, 1, 0). digits(N, M, D) when M > N -> D; digits(N, M, D) -> digits(N, M*10, D+1). Or did I misunderstand what you ment by O(log(N)), was N the number of decimal digits in the number? xlcr@REDACTED (Nick Linker) writes: > Kostis Sagonas wrote: > > >Nick Linter wrote: > > > >An example would have helped us understand better what the issue is. > >Currently, I get: > > > >Eshell V5.5 (abort with ^G) > >1> math:log(...). > >480.407 > > > Well, I get the following result: > > 43> math:log10(test:fib(1476)). > 308.116 > 44> math:log10(test:fib(1477)). 
> =ERROR REPORT==== 27-Mar-2006::10:57:47 === > Error in process <0.181.0> with exit value: > {badarith,[{math,log10,[16#00012D269 > C3FA9D767CB55B0DDF8E6A2DE7B1D967FF8D0BE61EB16ACD02D1A53C95A370ABD95285998D226919 > > D95DCA54298D92C348C27E635E1690E7858060F0DC14E885F0217413C55A1F820D6EB051F87C7C73 > > 818AC23E4A9A00A2072C08C6697A2FAD66FC7DEBEEB7A5F582D7639A431B9C99CEC6315A9ED1C652 > > A81A6B59A39]},{erl_eval,do_apply,5},{shell,exprs,6},{shell,eval_loop,3}]} > > ** exited: {badarith,[{math,log10, > [211475298697902185255785861961179135570552502746803 > 25217495622655863402432394766663713782393252439761186467156621190833026337742520 > > 45520741882086869936691237540043402509431087092122991804222930097654049305082429 > > 75773774612140021599477983006713536106549441161323499077298115887067363710153036 > > 315849480388057657]}, > {erl_eval,do_apply,5}, > {shell,exprs,6}, > {shell,eval_loop,3}]} ** > > My system is Windows XP, Erlang R10B. > > >fib(N) -> > > trunc((1/sqrt(5)) * (pow(((1+sqrt(5))/2),N) - pow(((1-sqrt(5))/2),N))). > > > Good solution :-) > Now I also have different idea without using recursion. It is based on > the following equation > [F_n ] [1 1] [F_n-1] > [ ] = [ ] * [ ] > [F_n-1] [1 0] [F_n-2] > And we just have to calculate k-th power of the matrix > [[1,1],[1,0]]. It is possible to do within O(log(k)). > > >PS. The performance you are experiencing in your version of fib/1 > > has nothing to do with tail recursion... > > > Yes, exponential number of recursive calls... Thank you. > > I'm sorry, but most interesting question (2nd, about number of digits > in an integer) remains unanswered. But I guess it is very rare problem > with numbers, so if there is no answer, I will understand. > > Best regards, > Linker Nick. -- / Raimo Niskanen, Erlang/OTP, Ericsson AB From matthias@REDACTED Thu Mar 30 16:05:52 2006 From: matthias@REDACTED (Matthias Lang) Date: Thu, 30 Mar 2006 16:05:52 +0200 Subject: Newbie questions In-Reply-To: References: <200603250847.k2P8lBfH007162@spikklubban.it.uu.se> <44276DD2.2070503@mail.ru> Message-ID: <17451.58816.442837.749177@antilipe.corelatus.se> Raimo Niskanen writes: > Regarding your problem about determining the number of decimal > digits in a number, I just came to think of a simple enough > brute force O(log(N)) or rather O(NN) where NN is the > number of digits in the number: > > digits(N) when is_integer(N), N >= 0 -> digits(N, 1, 0). > > digits(N, M, D) when M > N -> D; > digits(N, M, D) -> digits(N, M*10, D+1). I haven't ever studied bignum implementation, neither for Erlang or in general, but I don't think this solution is O(log N). I would expect M*10 to be expensive, i.e. O(M). And I'm not too sure about the cost of the "M > N" test. Matthias From matthias@REDACTED Thu Mar 30 17:51:59 2006 From: matthias@REDACTED (Matthias Lang) Date: Thu, 30 Mar 2006 17:51:59 +0200 Subject: Newbie questions In-Reply-To: <17451.58816.442837.749177@antilipe.corelatus.se> References: <200603250847.k2P8lBfH007162@spikklubban.it.uu.se> <44276DD2.2070503@mail.ru> <17451.58816.442837.749177@antilipe.corelatus.se> Message-ID: <17451.65183.343431.934204@antilipe.corelatus.se> Matthias Lang writes: > > Regarding your problem about determining the number of decimal > > digits in a number, I just came to think of a simple enough > > brute force O(log(N)) or rather O(NN) where NN is the > > number of digits in the number: > > > > digits(N) when is_integer(N), N >= 0 -> digits(N, 1, 0). 
> > > > digits(N, M, D) when M > N -> D; > > digits(N, M, D) -> digits(N, M*10, D+1). > > I haven't ever studied bignum implementation, neither for Erlang or in > general, but I don't think this solution is O(log N). > > I would expect M*10 to be expensive, i.e. O(M). And I'm not too sure > about the cost of the "M > N" test. Er, that's wrong too. Bignum multiplication must be implemented as some sort of shift-and-add. Which would make it O(log M), not O(M) as I said. So I expect there to be log M multiplications, each costing log M. That makes the whole thing O(log N * log N). Matthias From raimo@REDACTED Thu Mar 30 18:03:48 2006 From: raimo@REDACTED (Raimo Niskanen) Date: 30 Mar 2006 18:03:48 +0200 Subject: Newbie questions References: <44276DD2.2070503@mail.ru>, , <17451.58816.442837.749177@antilipe.corelatus.se> Message-ID: You are right! I did not think about that. But bignum multiply by smallnum would require O(MM) multiplications plus O(MM) additions with carry where MM is the number of bignum digits (halfwords in the erlang implementation) in the bignum. So the bignum multiplication would be O(log(M)) itself. The comparision would be like a bignum addition which I guess also would be O(MM). Would that result in O(MM*MM) => O(log(M)*log(M))? Note: M > N could be optimized by first checking sign and then comparing number of bignum digits followed by comparision on highest bignum digit, which would be O(1) unless they differ in worst case the lowest digit which would be O(MM). This since bignums are represented as a flexible number of bignum digits (halfwords) of which the highest must not be zero... ...Now I checked the code, that was the implementation! matthias@REDACTED (Matthias Lang) writes: > Raimo Niskanen writes: > > Regarding your problem about determining the number of decimal > > digits in a number, I just came to think of a simple enough > > brute force O(log(N)) or rather O(NN) where NN is the > > number of digits in the number: > > > > digits(N) when is_integer(N), N >= 0 -> digits(N, 1, 0). > > > > digits(N, M, D) when M > N -> D; > > digits(N, M, D) -> digits(N, M*10, D+1). > > I haven't ever studied bignum implementation, neither for Erlang or in > general, but I don't think this solution is O(log N). > > I would expect M*10 to be expensive, i.e. O(M). And I'm not too sure > about the cost of the "M > N" test. > > Matthias -- / Raimo Niskanen, Erlang/OTP, Ericsson AB From fritchie@REDACTED Thu Mar 30 22:25:29 2006 From: fritchie@REDACTED (Scott Lystig Fritchie) Date: Thu, 30 Mar 2006 14:25:29 -0600 Subject: Mnesia and additional indexes: a cautionary tale In-Reply-To: Message of "Thu, 30 Mar 2006 07:38:49 +0200." Message-ID: <200603302025.k2UKPTFh076091@snookles.snookles.com> >>>>> "rvg" == Rudolph van Graan writes: >> The "beam" process size should've been about 1,400KB with >> everything loaded. But the process size was only 390MB, and "beam" >> was still using 100% CPU time ... doing something, I dunno what! rvg> When this happens, it is mostly DETS related processes running rvg> doing... what? The tables are disc_copies, so DETS isn't involved. But as another followup explained, the Mnesia secondary index is implemented as a bag, and insertion time is linear. rvg> I guess you mean all the entries have the same term in position rvg> 5. rvg> So in essence you suggest this happens when a lot of records rvg> contain the same value in an indexed field? Correct. Doing that linear insert 200K times can officially be called "slow", as far as I'm concerned. 
:-) It's nice to know that I'm not the only person who's had that problem ... for some value of "nice". rvg> Last question - is this running Windows as host OS? Nope, Linux. Though I'd expect this particular problem to bite Erlang on any platform. -Scott From ok@REDACTED Fri Mar 31 01:06:28 2006 From: ok@REDACTED (Richard A. O'Keefe) Date: Fri, 31 Mar 2006 11:06:28 +1200 (NZST) Subject: Newbie questions Message-ID: <200603302306.k2UN6SeE495849@atlas.otago.ac.nz> Nick Linker wrote: I'm sorry, but I meant quite different question: given a big number N, how to compute the number of digits? There is obvious solution: length(integer_to_list(N)), but it is not very efficient. I wanted to have a bit faster solution... Ok, nevermind. I'm aware of (and have used) several programming languages providing as-long-as-necessary integer arithmetic. Without consulting manuals (everything is being moved out of my office so that new carpet can be laid) it's hard to be sure, but from memory, NONE of them provides an operation "how many decimal digits in this bignum". Common Lisp has HAULONG, which tells you about how many BITS, and Java's BigInteger class offers bitLength(), which again tells you how many BITS. In Smalltalk, the obvious thing to do would be n printString size, and in Scheme it would be (string-length (number->string n)). It's not completely clear what you want. Is the "number of digits" in -123456890123456890 twenty (literally the number of digits) or twenty-one (the number of characters in the minimal decimal representation)? I must say that the Erlang reference material could be better organised. Trying to answer the question "what are all the predefined operations on numbers" is surprisingly hard. One of the great things about Erlang is that it is open source. emulator/big.c defines a function big_decimal_estimate() which gives an *estimate* of the number of decimal digits; that may be close enough for your needs. If it's not close enough, big_to_decimal() does the character conversion into a char* buffer, and doesn't build a list. Using emulator/bif.c:integer_to_list as a model, it would be quite easy to write a new BIF that gave you the number of decimal digits in a number. The question remains, WHY? For what kind of problem is it useful to know the number of decimal digits but NOT what the digits actually are? And would avoiding the list construction actually buy you very much? It must be O(#Digits) work to build the list, but finding out what the digits are is O(#Digits**2). I benchmarked the following code against length(integer_to_list(N)), for the first however many factorials. The two seemed to be about the same speed. Your mileage will of course vary. integer_digits(N) when integer(N) -> if N >= 0 -> natural_digits_loop(N, 0) ; N < 0 -> natural_digits_loop(-N, 1) end. natural_digits(N) when integer(N), N >= 0 -> natural_digits_loop(N, 0). natural_digits_loop(N, D) when N >= 10000 -> natural_digits_loop(N div 10000, D + 4); natural_digits_loop(N, D) when N >= 1000 -> D + 4; natural_digits_loop(N, D) when N >= 100 -> D + 3; natural_digits_loop(N, D) when N >= 10 -> D + 2; natural_digits_loop(_, D) -> D + 1. From ok@REDACTED Fri Mar 31 01:08:43 2006 From: ok@REDACTED (Richard A. 
O'Keefe) Date: Fri, 31 Mar 2006 11:08:43 +1200 (NZST) Subject: Newbie questions Message-ID: <200603302308.k2UN8hBl494405@atlas.otago.ac.nz> Raimo Niskanen wrote: Regarding your problem about determining the number of decimal digits in a number, I just came to think of a simple enough brute force O(log(N)) or rather O(NN) where NN is the number of digits in the number: digits(N) when is_integer(N), N >= 0 -> digits(N, 1, 0). digits(N, M, D) when M > N -> D; digits(N, M, D) -> digits(N, M*10, D+1). I am reading "O(NN)" as "O(#Digits)". Unfortunately, this is O(#Digits**2), because the multiplication M*10 is O(#Digits), not O(1). From ok@REDACTED Fri Mar 31 02:23:09 2006 From: ok@REDACTED (Richard A. O'Keefe) Date: Fri, 31 Mar 2006 12:23:09 +1200 (NZST) Subject: Child modules draft feedback wanted Message-ID: <200603310023.k2V0N9lP371895@atlas.otago.ac.nz> I've mentioned "child modules" in this mailing list. http://www.cs.otago.ac.nz/staffpriv/ok/childmod.htm is a *draft* description of what I'm talking about. The configuration language section is missing entirely. The Prolog version of the idea is very sketchy. But I think the description of the Erlang version is clear enough to be criticised, so if anyone would care to read it (remembering that it's a draft) and send me comments, that would be a kindness. From rlenglet@REDACTED Fri Mar 31 06:13:33 2006 From: rlenglet@REDACTED (Romain Lenglet) Date: Fri, 31 Mar 2006 13:13:33 +0900 Subject: Child modules draft feedback wanted In-Reply-To: <200603310023.k2V0N9lP371895@atlas.otago.ac.nz> References: <200603310023.k2V0N9lP371895@atlas.otago.ac.nz> Message-ID: <200603311313.33936.rlenglet@users.forge.objectweb.org> Richard A. O'Keefe wrote: > I've mentioned "child modules" in this mailing list. > > http://www.cs.otago.ac.nz/staffpriv/ok/childmod.htm > > is a *draft* description of what I'm talking about. > The configuration language section is missing entirely. > The Prolog version of the idea is very sketchy. > But I think the description of the Erlang version is clear > enough to be criticised, so if anyone would care to read it > (remembering that it's a draft) and send me comments, that > would be a kindness. Let's specify something more like existing classical component models and ADLs: In the "client" module: -module(client). -require(fred, [roast/1,]). ... dosmthng(Foo) -> fred:roast(Foo). My -require clause is almost identical to your -use_child directive, and the module name is not a real module name, but rather a "logical" one which scope is limited to the declaring module. And an ADL (yes, the "configuration language" here is in fact an Architecture Description Language) would allow to specify the dependencies *outside* of the code, as bindings between the -requires clauses' logical module names and actual module names. Just like your proposal, this follows the Principle of Separation of Concerns, hence it allows to modify dependencies without modifying the code and recompiling. Modifying dependencies could be done either at compile time or run-time (this would be an implementation detail). For instance: {bind, client, fred, actualfred} Would specify that all references to the 'fred' logical module in module 'client' must be resolved as references to actual module 'actualfred'. Of course, just like you allow several -use_child clauses, several -require clauses should be allowed in a module, hence a module could be bound to several other modules. And 'actualfred' would be a normal module. 
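As a rough illustration of how such a list of 'bind' records could be consulted, here is a minimal sketch in Erlang; the module name adl_bindings and the function resolve/3 are invented for this illustration and are not part of the proposal:

    %% Bindings is a list of {bind, InModule, LogicalName, ActualModule}.
    %% resolve/3 returns the actual module that a logical name refers to
    %% inside a given module, or 'undefined' if no binding was given.
    -module(adl_bindings).
    -export([resolve/3]).

    resolve(InModule, Logical, [{bind, InModule, Logical, Actual} | _]) ->
        {ok, Actual};
    resolve(InModule, Logical, [_ | Rest]) ->
        resolve(InModule, Logical, Rest);
    resolve(_InModule, _Logical, []) ->
        undefined.

With the architecture above, resolve(client, fred, [{bind, client, fred, actualfred}, {bind, actualfred, parent, client}]) returns {ok, actualfred}.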
If 'actualfred' has dependencies itself (as were expressed with "To_The_Child" declarations in your proposal), then those can be expressed with -require likewise:: (there is no need to distinguish between dfferent "kinds of depencencies", and between "parent" and "child" modules) -module(actualfred). -require(parent, [beef/1]). ... bar(Bar) -> parent:beef(Bar). And in my simple ADL, it could be bound likewise: {bind, actualfred, parent, client} A complete architecture specification would be a list of 'bind' records: [{bind, client, fred, actualfred} {bind, actualfred, parent, client}] I believe that we don't need any more concepts in the ADL so far. Except perhaps your concepts of replaceable / integrated. But I believe that such things should be specified in an ADL spec rather than in the code: the spec in the code should be reduced to the strict minimal, and anything that can be specified outside of the code (typically, in the ADL) should be specified outside (Principle of Separation of Concerns...). Extensibility of such architectural mechanisms should rely on the extensibility of the ADL, not on the module source code syntax. Actually, I believe that one ADL could not cover all needs, hence we could need several ones: we need a simple ADL to cover the minimum needs (bindings between modules), but we may need a more complex, extensible ADL for the more complex cases. IMHO, my proposal: 1- is simpler than your proposal: only one new clause is added to the syntax (-require); 2- does not distinguish between "parent" and "child" modules: such a distinction is not necessary; 3- allows all that your proposal allows. This is just a simplification of your syntax and concepts. I hope this it is a step forward... -- Romain LENGLET From ok@REDACTED Fri Mar 31 07:10:02 2006 From: ok@REDACTED (Richard A. O'Keefe) Date: Fri, 31 Mar 2006 17:10:02 +1200 (NZST) Subject: Child modules draft feedback wanted Message-ID: <200603310510.k2V5A2a5498268@atlas.otago.ac.nz> Romain Lenglet wrote a reply to my request for comments. Unfortunately, he sent two copies, and I replied privately to what looked like private mail before seeing it again in this mailing list. I enclose a copy of my reply. In brief, his counter-proposal doesn't seem to do anything for most of the problems I am trying to solve. PERHAPS I AM TRYING TO SOLVE THE WRONG PROBLEMS. My basic question is "how can I get rid of -include and -ifdef and system-dependent file names in source code?" Maybe that's the wrong question. ... begin extract from reply ... Let me just repeat back what I have understood from your mail: -module(client). -require(fred, [roast/1,]). ... dosmthng(Foo) -> fred:roast(Foo). Your proposal is to add just one construction, -require(Logical_Module_Name, Imports). Except for this being a "logical" module name, it is not clear whether or how this differs from the existing -import directive. My -require clause is almost identical to your -use_child directive, and the module name is not a real module name, but rather a "logical" one which scope is limited to the declaring module. But this now GREATLY complicates the implementation. In my proposal, I was extremely careful to ensure that NOTHING about the way module: prefixes currently work would have to change. There would continue to be a single flat global name space for modules and the interpretation of a module name would NOT be in any way context- dependent. 
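One concrete reason the flat, context-independent namespace matters: the module position of a remote call may be a variable that is bound only at run time, so a scheme that merely renames "logical" module names at compile time can never reach such calls. A small sketch, with module and function names invented for the example:

    -module(dynamic_call_demo).
    -export([digits_of/1]).

    %% The module and function of a remote call can be computed at run
    %% time; the atom 'erlang' is then looked up in the single global
    %% module namespace, not in any per-module alias table.
    digits_of(N) when is_integer(N), N >= 0 ->
        Mod = erlang,
        Fun = integer_to_list,
        length(Mod:Fun(N)).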
[Added in this copy:] I want it to be as easy as possible to add the new constructs to Erlang; the more that changes can be limited to the compiler, with *no* change to the semantics of full modules as such, the happier I will be. You are offering something as a feature that I regarded as a serious problem to be avoided. And an ADL (yes, the "configuration language" here is in fact an Architecture Description Language) would allow to specify the dependencies *outside* of the code, as bindings between the -requires clauses' logical module names and actual module names. Either you have missed the point of my configuration language or I have missed the point of your ADL. I am trying to solve two problems with the (as yet unwritten-up) configuration language: (1) I am trying to remove *FILE* names from the source files. I don't see where you say anything about file names. (2) I am trying to provide an alternative to -ifdef. Your ADL (no more described than mine) would appear to deal with that issue. There's an anti-point: (3) I *don't* want dependencies stated outside the source code; I want dependencies very explicit *in* the source code. Just like your proposal, this follows the Principle of Separation of Concerns, hence it allows to modify dependencies without modifying the code and recompiling. Maybe we are using the word "dependencies" to mean different things. And 'actualfred' would be a normal module. No, that's precisely what I *don't* want. I am NOT trying to set up some kind of hierarchy of normal modules. I am trying to design a principled replacement for -include AND I am trying to meet a frequently expressed "need" for some kind of hierarchical name scope WITHIN a single module. In particular, there are *TWO*-way links between an -include file and its host, and that's why -use_child and -begin_child have *two* interface lists. If 'actualfred' has dependencies itself (as were expressed with "To_The_Child" declarations in your proposal), then those can be expressed with -require likewise:: (there is no need to distinguish between dfferent "kinds of depencencies", and between "parent" and "child" modules) -module(actualfred). -require(parent, [beef/1]). ... bar(Bar) -> parent:beef(Bar). You seem to be taking the present system of modules and adding two things: (1) a module might be known by different names in other modules (2) the mapping "In real module X, logical module name Y means real module Z" is expressed outside the source files. It is as if you are asking "How could we make the present system more flexible?" But I am saying "the present system of relying on -include files is intolerable; how can we replace -include?" I believe that we don't need any more concepts in the ADL so far. Except perhaps your concepts of replaceable / integrated. But I believe that such things should be specified in an ADL spec rather than in the code: That won't work. Remember, the heart and soul of my proposal is GETTING RID OF -include. There are things that -include can do which normal imports CANNOT DO AT ALL. Many of them are things that shouldn't be done. Long term, we might even hope to replace -record with psi-terms or abstract patterns. But short term we are stuck with -record. And we need to bind a module to *some* kind of entity which can declare records, so that the module can use those record declarations. Your -require only provides a way to import functions from a logical module. 
That means that the -record problem is left RIGHT WHERE WE STARTED and -include is STILL needed to solve it. This creates an absolute distinction between integrated child modules and full modules: integrated child modules *CAN* provide -record information to their host, but full modules CANNOT. The fact that you *need* a record definition from somewhere is something that should be stated explicitly in the source code. The distinction between -record declarations and ordinary functions drives a distinction between child modules and full modules. IMHO, my proposal: 1- is simpler than your proposal: only one new clause is added to the syntax (-require); Yes, but it's simpler because it solves at most one of the problems I am trying to solve. 2- does not distinguish between "parent" and "child" modules: such a distinction is not necessary; But as I have just argued, it IS necessary. 3- allows all that your proposal allows. It doesn't seem to allow hardly anything of what my proposal allows. On the other hand, my proposal doesn't allow what your proposal does. My proposal does not allow a full module to be known by different names. I'm not sure how badly I want to prevent that, but it's certainly not a solution to any problem I'm trying to solve. That sounds rather negative. + Thank you VERY much for thinking about this. Even if we don't agree, the next version of the draft will be improved by me trying to clear things up. + My proposal certainly could be simplified. (In particular, there is no actual necessity for in line children, although other languages have them and they do meet a frequently expressed desired for something like block structure.) It is entirely credible that something not wholly dissimilar to your proposal COULD do the job. From lenglet@REDACTED Fri Mar 31 08:23:22 2006 From: lenglet@REDACTED (Romain Lenglet) Date: Fri, 31 Mar 2006 15:23:22 +0900 Subject: Child modules draft feedback wanted In-Reply-To: <200603310510.k2V5A2a5498268@atlas.otago.ac.nz> References: <200603310510.k2V5A2a5498268@atlas.otago.ac.nz> Message-ID: <200603311523.23079.lenglet@csg.is.titech.ac.jp> Richard A. O'Keefe wrote: > Romain Lenglet wrote > a reply to my request for comments. Unfortunately, he > sent two copies, and I replied privately to what looked like > private mail before seeing it again in this mailing list. > > I enclose a copy of my reply. In brief, his counter-proposal > doesn't seem to do anything for most of the problems I am > trying to solve. PERHAPS I AM TRYING TO SOLVE THE WRONG > PROBLEMS. My basic question is "how can I get rid of -include > and -ifdef and system-dependent file names in source code?" > Maybe that's the wrong question. No, I am trying to solve that problem: removing the use of -include, .hrl files, and -ifdef when those mechanisms are used to select different alternatives in an architecture / configuration. But my hypothesis (which I should have stated explicitly, sorry) is that architecture alternative selection should be done at the granularity of modules (not arbitrary pieces of code). This is more restrictive that the -include/-ifdef mechanisms, since you must encapsulate every alternative in modules/functions, but such a requirement has never been a problem in existing component models. It even has a good effect on code structuring, and eases maintenance. 
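One way to picture alternative selection at module granularity, without touching the preprocessor, is to make the chosen implementation module ordinary configuration data. A minimal sketch, assuming a hypothetical application 'myapp' with an environment key 'greeter_impl'; all module names here are invented:

    -module(greeter).
    -export([greet/0]).

    %% The alternative implementation is an ordinary module chosen by
    %% configuration rather than by -ifdef/-include; greeter_ping and
    %% greeter_pong would both export greet/0.
    greet() ->
        {ok, Impl} = application:get_env(myapp, greeter_impl),
        Impl:greet().

Switching between greeter_ping and greeter_pong is then a matter of changing the application environment, not of recompiling greeter.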
I am not trying to adapt the include mechanisms as you do (by introducing the concepts of "parent" and "child" modules), but rather to remove that concept entirely, and use only the existing concept of modules, and a way to specify dependencies between modules in a flexible way, so as to allow to easily select architectures without modifying the code. > Your proposal is to add just one construction, > > -require(Logical_Module_Name, Imports). > > Except for this being a "logical" module name, it is not clear > whether or how this differs from the existing -import > directive. Yes, that is the only difference with the -import module. The advantage over import is that the module does not depend on a particular real module statically. That binding can be stated outside of the module. With -import, the binding is static. > My -require clause is almost identical to your -use_child > directive, and the module name is not a real module name, but > rather a "logical" one which scope is limited to the > declaring module. > > But this now GREATLY complicates the implementation. It can very easily be implemented by program transformation at compile time. No need to change the runtime. > In my proposal, I was extremely careful to ensure that NOTHING > about the way module: prefixes currently work would have to > change. There would continue to be a single flat global name > space for modules and the interpretation of a module name > would NOT be in any way context- dependent. [Added in this > copy:] I want it to be as easy as possible to add the new > constructs to Erlang; the more that changes can be limited to > the compiler, with *no* change to the semantics of full > modules as such, the happier I will be. Replacing a "logical" module name in module:call() statements in the scope in a module, by an actual module name, can be done at compile time very easily. > You are offering something as a feature that I regarded as a > serious problem to be avoided. I don't see where there is a serious problem. > And an ADL (yes, the "configuration language" here is in fact > an Architecture Description Language) would allow to specify > the dependencies *outside* of the code, as bindings between > the -requires clauses' logical module names and actual module > names. > > Either you have missed the point of my configuration language > or I have missed the point of your ADL. I am trying to solve > two problems with the (as yet unwritten-up) configuration > language: > > (1) I am trying to remove *FILE* names from the source > files. I don't see where you say anything about file names. Since module names are also file names, dealing with logical module names instead of real module names removes file names from the source files. > (2) I am trying to provide an alternative to -ifdef. > Your ADL (no more described than mine) would appear to > deal with that issue. As I understand it, the main use of -ifdef, etc. is to select alternative architectures. Using an ADL, you would simply have to write several specs. > There's an anti-point: > > (3) I *don't* want dependencies stated outside the source > code; I want dependencies very explicit *in* the source code. *Dependencies* (what is required) must be stated in a module's source code, since they are very interdependent on the module's implementation (you cannot change one without changing the other). On the other hand, *bindings* should be stated outside, because the architecture (the configuration) is a concern separated from the modules implementation. [...] 
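A rough sketch of the compile-time rewriting described above, as a parse transform that maps one hard-coded logical name to an actual module. A real tool would read the logical-to-actual map from the architecture description (for example via compiler options); the module name require_transform and the fred/actualfred binding are just the running example:

    -module(require_transform).
    -export([parse_transform/2]).

    %% Rewrite every fully qualified call to the logical module 'fred'
    %% into a call to the actual module 'actualfred'; everything else
    %% in the abstract forms is rebuilt unchanged.
    parse_transform(Forms, _Options) ->
        [rewrite(F) || F <- Forms].

    rewrite({call, L1, {remote, L2, {atom, L3, fred}, Fun}, Args}) ->
        {call, L1, {remote, L2, {atom, L3, actualfred}, rewrite(Fun)},
         [rewrite(A) || A <- Args]};
    rewrite(T) when is_tuple(T) ->
        list_to_tuple([rewrite(E) || E <- tuple_to_list(T)]);
    rewrite(L) when is_list(L) ->
        [rewrite(E) || E <- L];
    rewrite(Other) ->
        Other.

Compiling the client with compile:file("client.erl", [{parse_transform, require_transform}]) then turns fred:roast(X) into actualfred:roast(X).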
> Maybe we are using the word "dependencies" to mean different > things. Maybe. > > And 'actualfred' would be a normal module. > > No, that's precisely what I *don't* want. I am NOT trying to > set up some kind of hierarchy of normal modules. Modules and their bindings form a "flat" graph: they do not necessarily form a tree, or "hierarchy". > I am trying > to design a principled replacement for -include AND I am > trying to meet a frequently expressed "need" for some kind of > hierarchical name scope WITHIN a single module. And this contradicts my basic principle (which I state in the beginning of this email, sorry I should have stated this before): configuration alternatives ("what can be required") should be specified as functions in modules. We already have functions, and modules, and they can be used as units for selecting configuration alternatives. We don't need to introduce new concepts ("parent" and "child" modules) only for that purpose. > In particular, there are *TWO*-way links between an -include > file and its host, and that's why -use_child and -begin_child > have *two* interface lists. No. There *may* be two links. Or more (to other modules, etc.). And those links can be expressed in a unique simple way, with my -require clause. Our proposals are equivalent, except that -require is more general. > You seem to be taking the present system of modules and adding > two things: (1) a module might be known by different names in > other modules (2) the mapping "In real module X, logical > module name Y means real module Z" is expressed outside the > source files. Exactly. > It is as if you are asking "How could we make the present > system more flexible?" But I am saying "the present system of > relying on -include files is intolerable; how can we replace > -include?" My system replaces -ifdef and -include when it is used to select configuration alternatives. It replaces that (taken from your own document): -ifdef(use_ping). -include("ping.hrl"). -else. -include("pong.hrl"). -endif. When ping.hrl and pong.hrl contain alternative implementations. > I believe that we don't need any more concepts in the ADL so > far. Except perhaps your concepts of replaceable / integrated. > But I believe that such things should be specified in an ADL > spec rather than in the code: > > That won't work. Remember, the heart and soul of my proposal > is GETTING RID OF -include. There are things that -include > can do which normal imports CANNOT DO AT ALL. Many of them > are things that shouldn't be done. I agree very much with you. And my proposal does not replace all uses of -include / -ifdef, but it solves at least the important problem of selecting alternative implementations. > Long term, we might even > hope to replace -record with psi-terms or abstract patterns. > But short term we are stuck with -record. I also agree with you, in that my proposal does not replace -record. What I would like is to introduce one construct for every actual usage of -include / -ifdef, until all uses are covered by better constructs, and we can get rid of / deprecate -include and -ifdef. I think that my -require proposal covers well the problem of selecting alternative implementations. We still need to find a replacement for records, etc. I think that it is a bad idea to try to specify a single construct that would replace all actual usages of -include and -ifdef: either the new construct will be as bad as what it replaces, or it will not cover all usages. Maybe you have arguments against that, but please write them. 
;-) > And we need to bind > a module to *some* kind of entity which can declare records, > so that the module can use those record declarations. Your > -require only provides a way to import functions from a > logical module. That means that the -record problem is left > RIGHT WHERE WE STARTED and -include is STILL needed to solve > it. I agree. But let's solve one problem at a time?! > This creates an absolute distinction between integrated child > modules and full modules: integrated child modules *CAN* > provide -record information to their host, but full modules > CANNOT. Why not have a specific construct or convention to get record information at a module level (not at a source code level, like -record). For instance, for behaviour, the functions to implement are specified in the behaviour module as a function: behaviour_info/1. What not have a similar record_info/1 function??? That function would return the list of field names (as atoms) for the record name given as an atom as an argument. E.g.: -module(record_spec). -export([record_info/1]). record_info(record1) -> [field1, field2]; record_info(record2) -> [field1, field2, field3]. And in using modules, we could use a modified -record construct: -record(Module, RecordName). It would be like an "import of record definition". E.g.: -record(record_spec, record1). We all are familiar with such constructs, support for such constructs already exists in the compiler (for behaviours), and we get rid of the need for -include for records. What do you think of that? A specific solution for a specific problem: records specifications. And my -require proposal covers the usage of -include, etc. for alternative selection. Are there other usages that must be covered? > The fact that you *need* a record definition from > somewhere is something that should be stated explicitly in the > source code. The distinction between -record declarations and > ordinary functions drives a distinction between child modules > and full modules. Doesn't my proposal just above meet that requirement? > IMHO, my proposal: > 1- is simpler than your proposal: only one new clause is > added to the syntax (-require); > > Yes, but it's simpler because it solves at most one of the > problems I am trying to solve. That's right. > 2- does not distinguish between "parent" and "child" modules: > such a distinction is not necessary; > > But as I have just argued, it IS necessary. Again, I don't think so. > 3- allows all that your proposal allows. > > It doesn't seem to allow hardly anything of what my proposal > allows. But you seem to try to cover all usages of the preprocessor with a unique construct, which I think is a bad idea, and forces you to introduce new concepts such as "parent" and "child" modules. I propose instead to replace every usage by a specific solution. > On the other hand, my proposal doesn't allow what your > proposal does. My proposal does not allow a full module to be > known by different names. I'm not sure how badly I want to > prevent that, but it's certainly not a solution to any problem > I'm trying to solve. > > That sounds rather negative. > > + Thank you VERY much for thinking about this. > Even if we don't agree, the next version of the draft will > be improved by me trying to clear things up. > > + My proposal certainly could be simplified. (In particular, > there is no actual necessity for in line children, although > other languages have them and they do meet a frequently > expressed desired for something like block structure.) 
It is > entirely credible that something not wholly dissimilar to your > proposal COULD do the job. -- Romain LENGLET From csanto@REDACTED Fri Mar 31 11:03:33 2006 From: csanto@REDACTED (Corrado Santoro) Date: Fri, 31 Mar 2006 11:03:33 +0200 Subject: A new process that does not appear in appmon Message-ID: <442CF065.1070107@diit.unict.it> Hi all, I have an application with a supervisor that runs tree gen_servers. Now I've added another process to the supervisor's children: it starts correctly, but it does not appear in appmon. What's wrong? I think that it could be a supid thing, but I'm not figuring out what's happening... Thanks, --Corrado -- ====================================================== Eng. Corrado Santoro, Ph.D. University of Catania - Engineering Faculty Department of Computer Science and Telecommunications Engineering Viale A. Doria, 6 - 95125 CATANIA (ITALY) Tel: +39 095 7382380 +39 095 7387035 +39 095 7382365 +39 095 7382364 VoIP: sip:7035@REDACTED Fax: +39 095 7382397 EMail: csanto@REDACTED Personal Home Page: http://www.diit.unict.it/users/csanto NUXI Home Page: http://nuxi.diit.unict.it ====================================================== From ulf.wiger@REDACTED Fri Mar 31 11:21:38 2006 From: ulf.wiger@REDACTED (Ulf Wiger (AL/EAB)) Date: Fri, 31 Mar 2006 11:21:38 +0200 Subject: Child modules draft feedback wanted Message-ID: Romain Lenglet wrote: > > Why not have a specific construct or convention to get record > information at a module level (not at a source code level, > like -record).
For instance, for behaviour, the functions to > implement are specified in the behaviour module as a function: > behaviour_info/1. What not have a similar record_info/1 > function??? That function would return the list of field > names (as atoms) for the record name given as an atom as an argument. > > E.g.: > > -module(record_spec). > -export([record_info/1]). > record_info(record1) -> > [field1, field2]; > record_info(record2) -> > [field1, field2, field3]. > This was basically proposed by Mats Cronqvist not long ago: http://www.erlang.org/ml-archive/erlang-questions/200509/msg00260.html Since it is not allowed today to define a function record_info/1 in a module, it would be very easy to have the compiler generate and export such a function, rather than providing it only as a pseudo-function. > And in using modules, we could use a modified -record construct: > -record(Module, RecordName). > It would be like an "import of record definition". E.g.: > -record(record_spec, record1). I would prefer -use_record or -import_record, to avoid overloading. /Ulf W From svenolof@REDACTED Fri Mar 31 11:45:46 2006 From: svenolof@REDACTED (Sven-Olof Nystr|m) Date: Fri, 31 Mar 2006 11:45:46 +0200 Subject: Newbie questions In-Reply-To: <200603302306.k2UN6SeE495849@atlas.otago.ac.nz> References: <200603302306.k2UN6SeE495849@atlas.otago.ac.nz> Message-ID: <17452.64074.324026.377780@harpo.it.uu.se> Richard A. O'Keefe writes: > Nick Linker wrote: > I'm sorry, but I meant quite different question: given a big > number N, how to compute the number of digits? > > There is obvious solution: length(integer_to_list(N)), but it > is not very efficient. I wanted to have a bit faster > solution... Ok, nevermind. > > I'm aware of (and have used) several programming languages providing > as-long-as-necessary integer arithmetic. Without consulting manuals > (everything is being moved out of my office so that new carpet can > be laid) it's hard to be sure, but from memory, NONE of them provides > an operation "how many decimal digits in this bignum". Common Lisp > has HAULONG, which tells you about how many BITS, and Java's > BigInteger class offers bitLength(), which again tells you how many > BITS. In Smalltalk, the obvious thing to do would be n printString size, > and in Scheme it would be (string-length (number->string n)). You seem to have your Lisps confused :-). Haulong is a function in MacLisp; the corresponding operation in Common Lisp is integer-length. Both functions give the length of the integer in bits. > It's not completely clear what you want. Is the "number of digits" > in -123456890123456890 twenty (literally the number of digits) or > twenty-one (the number of characters in the minimal decimal representation)? Curiously, both CL's integer-length and Java's bitLength() define the length of negative numbers so that -1 has length 0 (same as the length of 0, of course). Sven-Olof From gunilla@REDACTED Fri Mar 31 11:56:29 2006 From: gunilla@REDACTED (Gunilla Arendt) Date: Fri, 31 Mar 2006 11:56:29 +0200 Subject: A new process that does not appear in appmon In-Reply-To: <442CF065.1070107@diit.unict.it> References: <442CF065.1070107@diit.unict.it> Message-ID: <442CFCCD.6050800@erix.ericsson.se> First thing to check, is the child process linked to the supervisor? Regards, Gunilla Corrado Santoro wrote: > Hi all, > > I have an application with a supervisor that runs tree gen_servers. Now > I've added another process to the supervisor's children: it starts > correctly, but it does not appear in appmon. 
What's wrong? I think that > it could be a supid thing, but I'm not figuring out what's happening... > > Thanks, > --Corrado > From vlad.xx.dumitrescu@REDACTED Fri Mar 31 11:57:19 2006 From: vlad.xx.dumitrescu@REDACTED (Vlad Dumitrescu XX (LN/EAB)) Date: Fri, 31 Mar 2006 11:57:19 +0200 Subject: Record functions and other nifty features Message-ID: <11498CB7D3FCB54897058DE63BE3897C016FCCD7@esealmw105.eemea.ericsson.se> Hi, > -----Original Message----- > From: Ulf Wiger (AL/EAB) > Romain Lenglet wrote: > > Why not have a specific construct or convention to get record > > information at a module level (not at a source code level, like > > -record). > > This was basically proposed by Mats Cronqvist not long ago: > http://www.erlang.org/ml-archive/erlang-questions/200509/msg00260.html There are several such proposals for "good to have" features around, wouldn't it be interesting to put them all together in some place (a wiki or trapexit, maybe?) and have some kind of voting for them? Regards, Vlad From csanto@REDACTED Fri Mar 31 12:28:07 2006 From: csanto@REDACTED (Corrado Santoro) Date: Fri, 31 Mar 2006 12:28:07 +0200 Subject: A new process that does not appear in appmon In-Reply-To: <442CFCCD.6050800@erix.ericsson.se> References: <442CF065.1070107@diit.unict.it> <442CFCCD.6050800@erix.ericsson.se> Message-ID: <442D0437.8030703@diit.unict.it> Hi Gunilla, Gunilla Arendt wrote: > First thing to check, is the child process linked to the supervisor? I've said that I was sure it was a stupid mistake... and it is! Due to a cut&paste side-effect, I used "start" instead of "start_link" :-) Everything is OK now. Thanks! --Corrado -- ====================================================== Eng. Corrado Santoro, Ph.D. University of Catania - Engineering Faculty Department of Computer Science and Telecommunications Engineering Viale A. Doria, 6 - 95125 CATANIA (ITALY) Tel: +39 095 7382380 +39 095 7387035 +39 095 7382365 +39 095 7382364 VoIP: sip:7035@REDACTED Fax: +39 095 7382397 EMail: csanto@REDACTED Personal Home Page: http://www.diit.unict.it/users/csanto NUXI Home Page: http://nuxi.diit.unict.it ====================================================== From raimo@REDACTED Fri Mar 31 13:23:55 2006 From: raimo@REDACTED (Raimo Niskanen) Date: 31 Mar 2006 13:23:55 +0200 Subject: Newbie questions References: <200603302306.k2UN6SeE495849@atlas.otago.ac.nz>, <17452.64074.324026.377780@harpo.it.uu.se> Message-ID: svenolof@REDACTED (Sven-Olof Nystr|m) writes: > Richard A. O'Keefe writes: > > Nick Linker wrote: > > I'm sorry, but I meant quite different question: given a big > > number N, how to compute the number of digits? > > > > There is obvious solution: length(integer_to_list(N)), but it > > is not very efficient. I wanted to have a bit faster > > solution... Ok, nevermind. > > > > I'm aware of (and have used) several programming languages providing > > as-long-as-necessary integer arithmetic. Without consulting manuals > > (everything is being moved out of my office so that new carpet can > > be laid) it's hard to be sure, but from memory, NONE of them provides > > an operation "how many decimal digits in this bignum". Common Lisp > > has HAULONG, which tells you about how many BITS, and Java's > > BigInteger class offers bitLength(), which again tells you how many > > BITS. In Smalltalk, the obvious thing to do would be n printString size, > > and in Scheme it would be (string-length (number->string n)). > > > You seem to have your Lisps confused :-). 
Haulong is a function in > MacLisp; the corresponding operation in Common Lisp is > integer-length. Both functions give the length of the integer in bits. > > > It's not completely clear what you want. Is the "number of digits" > > in -123456890123456890 twenty (literally the number of digits) or > > twenty-one (the number of characters in the minimal decimal representation)? > > Curiously, both CL's integer-length and Java's bitLength() define the > length of negative numbers so that -1 has length 0 (same as the length > of 0, of course). > That seems to be some kind of 2-complement notion of bit length. If you define the bit length of negative numbers to be the number of bits from the right to the last non-1 as opposed to for positive numbers to the last non-0, you would get this. > > Sven-Olof > -- / Raimo Niskanen, Erlang/OTP, Ericsson AB
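To make that convention concrete, here is a small Erlang sketch of an integer-length style function that follows the same rule, so that both 0 and -1 have length 0; the function name bit_length is invented for this example:

    %% Number of bits needed for a two's-complement representation,
    %% excluding the sign bit, in the style of Common Lisp's
    %% integer-length and Java's BigInteger bitLength().
    bit_length(N) when is_integer(N), N >= 0 -> bit_length1(N, 0);
    bit_length(N) when is_integer(N)         -> bit_length1(-N - 1, 0).

    bit_length1(0, D) -> D;
    bit_length1(N, D) -> bit_length1(N bsr 1, D + 1).

Multiplying such a bit count by log10(2), roughly 0.30103, also gives a quick estimate of the number of decimal digits discussed earlier in the thread, without converting the bignum to a list.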