[erlang-questions] Two beautiful programs - or web programming made easy

Edmond Begumisa ebegumisa@REDACTED
Mon Feb 14 21:17:20 CET 2011


Here's where you lose me: say I have a js function foo that I want to
call from my browser. Note that unlike JSON, this is an actual function
and should therefore contain code (so parsing it, rather than running it,
wouldn't make any sense)...

a) Static version: Stick foo in a file foo.js on the server. The Erlang side
streams the file to the browser, which reads it, interprets it and runs it.
b) Eval version: Stick foo in a message. The Erlang side streams the function
to the browser, which, using eval, reads it, interprets it and runs it
(both are sketched below).
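
A minimal sketch of both -- the websocket URL and the message contents
are made up for illustration:

    // a) Static version: foo.js is served like any other script, e.g. with
    //    <script src="/foo.js"></script>; the browser fetches it,
    //    interprets it and runs it.

    // b) Eval version: the same code arrives over a websocket and is
    //    handed to eval.
    var ws = new WebSocket("ws://example.com/ui");
    ws.onmessage = function (evt) {
        eval(evt.data);   // evt.data might be "foo();" or the whole
                          // definition of foo followed by a call to it
    };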

Why is b) suddenly crossing some threshold of risk that is not equally
inherent in a)? What unacceptable extra risk am I introducing by doing b)
instead of a)? Why does doing b) suddenly require N times more competence
and security consciousness, such that you don't trust yourself and would
much prefer a) instead?

From what I've read, it appears that you'd never use js at all in your
websites.

- Edmond -

On Mon, 14 Feb 2011 12:39:06 +1100, Frédéric Trottier-Hébert
<fred.hebert@REDACTED> wrote:

> Replies are still in between bits of text.
> On 2011-02-13, at 15:43 PM, Edmond Begumisa wrote:
>
>> On Mon, 14 Feb 2011 05:59:19 +1100, Frédéric Trottier-Hébert  
>> <fred.hebert@REDACTED> wrote:
>>
>>>
>>> On 2011-02-12, at 06:33 AM, Joe Armstrong wrote:
>>>
>>>>
>>>> The Javascript equivalent is:
>>>>
>>>>  function onMessage(evt) {
>>>>     eval(evt.data);
>>>>  }
>>>>
>>>> Where the data comes from a websocket.
>>>>
>>> This is rather risky. Eval will take any code whatsoever and run it  
>>> for you.
>>
>> Likewise the browser will take any static js (<script> tags) whatsoever  
>> from your server and run it for you.
>
> Right. This is why ideally you want to pass in very precise functions and
> do something RPC-like (despite Joe not liking it), or have your own
> parser (as is the case with JSON). It's not that it's impossible to
> make the other ways safe, but I wouldn't trust most people (including
> myself) to get it right most of the time.
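
(To be sure we mean the same thing by "RPC-like": I read that as
something roughly like the sketch below, where only whitelisted
handlers can be invoked rather than arbitrary code -- the websocket
URL and the message format are made up.)

    var ws = new WebSocket("ws://example.com/ui");
    var handlers = {
        showInvoices: function (year) { /* render invoices ... */ },
        logout:       function ()     { /* ... */ }
    };
    ws.onmessage = function (evt) {
        // e.g. evt.data = '{"call": "showInvoices", "args": [2011]}'
        var msg = JSON.parse(evt.data);
        if (handlers.hasOwnProperty(msg.call)) {
            handlers[msg.call].apply(null, msg.args || []);
        }
    };
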
>>
>>> If you have dynamic content, without proper escaping and being very  
>>> careful, users could run arbitrary code in your page, including stuff  
>>> to steal session data and send it over to either some other site, or  
>>> perform actions for the user which they do not necessarily approve on  
>>> (making their profile public, closing their account, worms, etc.)
>>>
>>
>> Likewise if you have any dynamic content in js code on your server  
>> without proper escaping and not being very careful, users could...
>
> Exactly. You're only ever as safe as your weakest link. Some frameworks
> (like jQuery, for some methods) handle it for you, but usually there is no
> such thing for 'eval'.
>
>> ... Doesn't the same-origin/XSS rule that applies to non-evaled code
>> apply to evaled code as well? Doesn't the same duty of care to end-users
>> (protecting privacy, properly escaping data, etc.) apply in both cases?
>> Don't you have to be careful either way?
>>
>
> Yes. But some ways of doing things are safer than others by default. The
> problem with 'doing things right' is how much trust you put in yourself
> and your team of developers. I'm of the opinion that most people who
> feel competent enough to handle security actually overlook a lot of it.
> Have you always checked everything for XSS in all encodings? CSRF? Ever
> used something like MD5 or SHA to hash passwords? Sent such passwords over
> e-mail, etc.? Those are very basic points, and I can tell you that most
> developers who have worked on the web have had a problem with at least
> one of these at some point or another. Hell, even Gmail had severe CSRF
> holes at one point that let people inject themselves into your
> forwarded email addresses.
>
> Security is hard, and steering clear of the risky line is often a good
> option if you're not 100% sure of what you're doing. A sufficiently skilled
> cook can probably prepare a meal while juggling knives safely, but it's
> usually not necessary to do so, and it's not appropriate for everyone to
> try either.
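
A concrete instance of the "proper escaping" point above, for anyone
following along -- a minimal sketch; the element id is invented, and
'name' stands in for any untrusted value arriving in a message:

    // Suppose 'name' arrived in a message and is therefore untrusted:
    var name = "<img src=x onerror=alert(1)>";   // what an attacker might send

    // Risky: the string is parsed as markup, so the onerror above would run:
    //   document.getElementById("greeting").innerHTML = "Hello " + name;

    // Safer: insert it as text so it can never become markup
    // (textContent in standards browsers, innerText on old IE):
    document.getElementById("greeting").textContent = "Hello " + name;
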
>
>>> In fact, this is a reason why people like Douglas Crockford preferred
>>> to write JSON parsers rather than just eval'ing the text. It's just not
>>> safe enough.
>>>
>>
>> Indeed you are correct, but...
>>
>> From http://www.json.org/js.html ...
>>
>> "...The use of eval is indicated when the source is *trusted* and  
>> *competent*..."
>
> The *competent* part is the one that worries me. I think most developers  
> (myself included) tend to overestimate their competence when it comes to  
> security.
>>
>> "...In web applications over XMLHttpRequest, communication is permitted  
>> only to the same origin that provides that page, so it is *trusted*. But
>> it *might not be competent*. If the server is not rigorous in its JSON  
>> encoding, or if it does not scrupulously validate all of its inputs,  
>> then it could deliver invalid JSON text that could be carrying  
>> dangerous script..."
>>
>> So it boils down to the competence of the code on the server. You have  
>> to be careful how you construct your pages and javascript. But then,  
>> this should *always* be the case.
>
> Yes, agreed. Again, I'm supporting the position of 'why risk it?' not  
> the line of 'it's impossible to be safe!'
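
For anyone following the Crockford point above, the practical difference
is roughly this -- a sketch; the URL is made up, and JSON.parse comes
either natively from the browser or from Crockford's json2.js:

    var xhr = new XMLHttpRequest();
    xhr.open("GET", "/invoices.json", true);          // made-up URL
    xhr.onreadystatechange = function () {
        if (xhr.readyState === 4 && xhr.status === 200) {
            // What the warning is about -- eval runs whatever came back,
            // JSON or not:
            //   var data = eval("(" + xhr.responseText + ")");

            // A parser accepts only strict JSON and throws on anything else:
            var data = JSON.parse(xhr.responseText);
        }
    };
    xhr.send();
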
>
>>> Plus you have to call the javascript parser and whatnot, which is  
>>> usually rather slow.
>>
>> One could send the core of the app logic in a static js file, then have
>> the eval only make simple calls like "appui.getInvoices()". That would
>> perform fairly well.
>
> Yes, if the invoices do contain fairly limited and well-defined data  
> that you know can *never* cause a problem.
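
The sort of thing being suggested above would look roughly like this --
a sketch; the function names and the whitelisting check are made up, and
the check would of course need to be as strict as the application demands:

    // appui.js, served as an ordinary static file:
    var appui = {
        getInvoices: function () { /* fetch and render invoices ... */ },
        showHelp:    function () { /* ... */ }
    };

    // The websocket handler only ever evals short calls into that API
    // and refuses anything else:
    var ws = new WebSocket("ws://example.com/ui");    // made-up URL
    ws.onmessage = function (evt) {
        if (/^appui\.\w+\(\);?$/.test(evt.data)) {
            eval(evt.data);   // e.g. "appui.getInvoices();"
        }
    };
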
>>
>>> The whole idea is pretty bad on the web, where you have to assume that  
>>> people will actively try to break your stuff and steal data from other  
>>> users (or you).
>>>
>>
>> That assumption is a bit dramatic. Questions of security cannot be
>> viewed in isolation from the application. One of my favorite quotes from
>> Bruce Schneier is applicable here. He was once asked about the
>> possibility of chaos ensuing due to internet security breaches...
>>
>> "No. Chaos is hard to create, even on the Internet. Here's an example.  
>> Go to Amazon.com. Buy a book without using SSL. Watch the total lack of  
>> chaos."
>
> The idea sounds fairly dramatic, but the point is basically that once
> someone's got an axe to grind against you or your applications,
> someone actively trying to break your stuff is *actually* going to
> happen. A lax attitude is what made one of our products (at a
> previous job) vulnerable to Russian hackers who ended up emailing
> customers with addresses stolen straight out of our databases. When it
> happens, it's already too late to react.
>>
>> I don't see how you can write off the "whole idea" as being bad. It may
>> require adjustments here and there. E.g. for particular pages where
>> paranoid security is needed, nothing stops you from doing things
>> differently there. You could encrypt what's sent. You could even serve
>> those pages the standard way with static files and SSL if it makes you
>> feel safer.
>
> SSL protects you against things like man-in-the-middle attacks, and
> encryption helps you on other points, but neither does anything about
> application-level security. The whole idea is not bad, but I would
> certainly want a serious specialist to look over my application
> if I were to use that trick in many places.
>
>
>>
>>>>
>>>> This technique is amazingly powerful.
>>>>
>>>> So now I only need one generic web page. Think of that.
>>>>
>>>> Only one page is needed - forever.
>>>>
>>> This is a problem when it comes to bookmarks, sharing the link with a  
>>> friend, searchability, browser history, etc. The web wasn't exactly  
>>> intended to be a stateful thing and you'll have to resort to hacks  
>>> such as hash-bangs to get around it. I suggest reading Tim Bray's
>>> "Broken Links" to see why that isn't a good solution anyway.
>>>
>>
>> True. But this problem is an age-old general AJAX/dynamic-markup  
>> problem. I agree it might be very visible in this case.
>>
>> However, I've written XULRunner apps with no back buttons -- no need  
>> for them with easy-to-navigate UIs. Most Adobe AIR apps I've seen have  
>> no browser history. It's made me question: How badly do end-users  
>> really need those things? If they do, couldn't we give them better  
>> application-specific versions inside our web-app UI?
>
> If I'm using a browser, I'd enjoy being able to use the web. What
> constitutes a 'very easy to use' application to you might not be the
> same for everyone. I do remember many Flash pages falling prey to the
> same problem. I think this is mostly a deeply rooted problem with the web,
> where you're piggy-backing sessions on a protocol that was absolutely
> not made for that. It sometimes works well enough (I'm thinking of chat
> applications or even Grooveshark here), so it's certainly not black and
> white, but I figure you know what I mean.
>>
>>> Plus I'd argue that javascript and Erlang should be kept separate and  
>>> you shouldn't try to generate one with the other,
>>
>> Good point. I thought about sending the js in static files and reducing  
>> the calls from Erlang to simple one-liners. But also note that the more  
>> powerful aspect of this (IMO) is not just sending js, but sending UI  
>> elements. Sending blocks of UI to an empty page! How can anyone not  
>> like that?
>>
> Separation of concerns. JS is about behaviours on the page and dynamic
> content; UI is both HTML (structure) and CSS (presentation). A couple of
> simple questions I like to ask to sort this out are: "would I be able to
> hire a designer to work on my site without guiding them around too
> much?" and "could I hire someone to just work on my javascript and HTML
> without them needing to know anything else?"
>
> If you answer no to these, you might have some overlapping domains in
> what you're doing.
>
> Then again, I'm a fan of really well-separated components in my  
> applications, which is why I like Erlang's processes and OTP  
> applications in the first place :)
>
> Another advantage of keeping things separate is caching -- this is  
> however pretty application and audience specific in terms of needs and  
> requirements.
>
>> - Edmond -
>>
>>> but at this point, I figure it's more of a matter of who wants to give  
>>> himself the trouble than anything.
>>>
>>>
>>> --
>>> Fred Hébert
>>> http://www.erlang-solutions.com
>>>
>>>
>>
>>
>> --
>> Using Opera's revolutionary e-mail client: http://www.opera.com/mail/
>
>
> --
> Fred Hébert
> http://www.erlang-solutions.com
>
>


-- 
Using Opera's revolutionary e-mail client: http://www.opera.com/mail/

