@require validations

§
Posted: 2014-03-03
Edited: 2014-03-03

(Ignore the @require links in this topic, that's the forum thinking we're talking about a user named "require".)

Greasemonkey and friends have a @require meta directive that lets authors pull in scripts from outside sources. While this is useful for authors, it represents a potential vector for malicious code. Users who inspect the code posted to Greasy Fork may not think to inspect any @requires, and even if they do, there is no guarantee that the @require will not silently change in the future.
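
For anyone unfamiliar with the directive, it lives in the script's metadata block; a header along these lines (the jQuery URL is only illustrative) pulls the named file in before the script runs:

// ==UserScript==
// @name        My script
// @namespace   https://example.com/
// @version     1.0
// @require     https://ajax.googleapis.com/ajax/libs/jquery/2.1.0/jquery.min.js
// @include     http://*
// ==/UserScript==

Whatever that URL serves is executed with the same privileges as the script itself, which is exactly why it matters what it points at.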

Because of these issues, I implemented a whitelist of things scripts could @require. If script authors attempted to use anything outside the whitelist, they were instructed to e-mail me what they were @requiring, and I would potentially add it to the whitelist. This was clearly cumbersome and only ever intended as a temporary measure, and after hearing feedback on the feature I moved reworking it up the priority list.

Starting now, instead of being blocked from posting scripts that @require outside the whitelist, authors are given the option to put their script "under assessment". Their scripts will be saved, but not made public (e.g. they will not appear in the script list) until all their @requires are in the whitelist. That can happen either because the authors change their script's @requires or because an admin has added to the whitelist. Admins will contact script authors by means of a discussion on their script.

The list of scripts currently under assessment will be monitored, and I'm aiming for each one to be resolved or to receive feedback within a day.

I hope this makes things easier on script authors while still keeping script users safe. As always, feedback is welcome.

§
Posted: 2014-03-03

Safe... but complicated, no?

Why not accept every @require and wait for a "report" (via a button on the script page) from members more clever than me, if they find something suspect in one?

§
Posted: 2014-03-03

A clever user is not as likely to check every @require in a script as the script itself. The malicious author could put up a file that looks just like, say, a jQuery library, but has a few lines of malicious code plugged into the middle. The malicious author could put benign code in the @require, only to later swap it out. The malicious author could dynamically serve the malicious code to some users and not others. Only a small minority of script users understand code well enough to spot something bad, and for these reasons, arbitrary @requires make it much harder for even them to do so.

This feature looks like it would completely eliminate bad @requires as a malicious code vector. Of course, a malicious author could always just put the bad code in the main part of the script, but this is much more visible, there's a version history so they can't transparently swap bad code in and out, and the site can run various checks automatically against the code to check for known malicious patterns (if you look at some of the malicious scripts on userscripts.org, you realize that it's just the same few things posted over and over).
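
As a rough illustration only (not the checks the site actually runs), a pattern scan can be as simple as matching submitted source against a list of known bad snippets:

// Hypothetical, simplified pattern check; not Greasy Fork's real code.
var knownBadPatterns = [
  /ajax\/like\.php/i,            // made-up stand-in for a Facebook auto-like endpoint
  /document\.cookie[^;\n]*http/i // made-up stand-in for cookie exfiltration
];
function looksSuspicious(source) {
  return knownBadPatterns.some(function (re) {
    return re.test(source);
  });
}
console.log(looksSuspicious("var x = 1;"));                          // false
console.log(looksSuspicious("xhr.open('POST', '/ajax/like.php');")); // true

A check like this only catches copy/paste reposts of known malware; anything novel or obfuscated sails straight past it, so it supplements rather than replaces the other measures.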

There will be a way for users to easily report suspicious scripts, but I prefer to be as proactive as possible with these things.

§
Posted: 2014-03-06
Edited: 2014-03-06

This feature is pretty pointless and needlessly restrictive, IMO. GM_xmlhttpRequest + eval accomplishes exactly the same thing, and there's no way you can proactively filter it. That, and I think people (including myself) are quickly getting fed up with the constant lock-down, dumb-down, nanny approach being taken by Google, Apple, etc.
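
For example, nothing stops a script from doing something like this (the URL is made up):

GM_xmlhttpRequest({
  method: "GET",
  url: "https://example.com/payload.js", // any server the script author controls
  onload: function (response) {
    eval(response.responseText);         // runs whatever the server sent back, whitelist or not
  }
});

No @require appears anywhere, so the whitelist never even sees it.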

§
Posted: 2014-03-06

I think there are ways I could filter XHR+eval. Probably not perfectly, though.

Just because this isn't the wild west where anything goes doesn't mean it's a walled garden. It's somewhere in the middle: trying to give authors free rein to do what they like as long as we can be reasonably sure about users' safety.

§
Posted: 2014-03-06

The problem with userscripts.org is that it has no security whatsoever. That doesn't mean we need to use a whitelist or "guilty until proven innocent" security. I can think of several reasons why not:

1. Lots of people are uncomfortable letting one person be the gatekeeper, even if they have good intentions. And if you were to disappear someday like Jesse Andrews, @require for new libraries would be effectively broken.
2. A whitelist is cumbersome for developers to apply for and to maintain.
3. It's just not effective. It doesn't solve the problem of people changing the external scripts after they're approved unless you do take the walled garden approach and have everything, including libraries, hosted on greasyfork. And it doesn't stop people from evaling external code, which I guarantee you will not be able to filter if they make even the most basic attempt at obfuscation:

var w = window;
var value = "value";
value = "e" + value.replace("ue", ""); // "value" -> "val" -> "eval"
var func = w[value];                   // func is now window.eval
func("alert('rekt');");                // arbitrary code, with no literal "eval" in the source

My ideal site would have three layers of security:
1. Captcha to stop spambots
2. Filters to stop humans from copy/pasting Facebook auto-like and such
3. Moderators to stop humans who evade the filters

I do hope you'll reconsider this. On another security issue, I think it would be a good idea to require that all scripts have @grant metadata, then show the permissions each script uses on its page.
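
For example, the script page could surface that a header declares these (both are standard Greasemonkey grants):

// @grant  GM_setValue
// @grant  GM_xmlhttpRequest

That tells a user at a glance that the script wants persistent storage and cross-origin requests, while a script declaring @grant none needs far less trust.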

§
Posted: 2014-03-06

The problem with userscripts.org is that it has no security whatsoever. That doesn't mean we need to use a whitelist

Right, this was just in response to "constant lock down, dumb down, nanny approach". Just saying there's a middle somewhere between "anything goes" and "walled garden", and based on the rest of your comment, you agree.

"guilty until proven innocent" security.

This isn't "guilty until proven innocent"; this is verifiability. A script that @requires an arbitrary external URL cannot be verified. The @required script can change at any time and do anything, and this would circumvent any other measure we put in place (including the ones you suggested) to stop bad code.

Lots of people are uncomfortable letting one person be the gatekeeper, even if they have good intentions.

Understandable. I don't intend to be the only one with the keys to the whitelist.

A whitelist is cumbersome for developers to apply for and to maintain

The only thing script authors need to do is post their script. If a @require is outside the whitelist, they just need to check a box to send it off for assessment.

It doesn't solve the problem of people changing the external scripts after they're approved unless you do take the walled garden approach and have everything, including libraries, hosted on greasyfork.

You can see the current whitelist here. The whitelisted URLs outside of Greasy Fork will all be well-known, public libraries, so it's not likely they will be swapped out for malware.

And it doesn't stop people from evaling external code, which I guarantee you will not be able to filter if they make even the most basic attempt at obfuscation:

You're assuming that a filter would only be looking at the source code. What about a filter that actually ran the source code and recorded which "suspicious" functions were called?
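
As a very rough sketch of the idea (nothing like this is actually implemented), the checker could execute the submitted source against stand-in globals and log what gets called; even an obfuscated lookup like the one above ends up hitting the stub:

// Hypothetical dynamic check, not anything Greasy Fork actually runs.
var source = 'var w=window; w["e"+"val"]("alert(1)");'; // stand-in for a submitted script
var called = [];
var stubs = {
  eval: function () { called.push("eval"); },
  GM_xmlhttpRequest: function () { called.push("GM_xmlhttpRequest"); }
};
// Inside the wrapper, both `window` and bare globals like eval resolve to the stubs object.
new Function("window", "with (window) {\n" + source + "\n}")(stubs);
console.log(called); // ["eval"]

It's far from bulletproof (timers, DOM access and plain network calls all need their own stubs, and code can detect it's being watched), but it judges what the code does rather than what it says.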

Even assuming malware authors are clever and are able to circumvent whatever filters we put in, making it harder on them is still useful. Just because someone can pick the lock doesn't mean we shouldn't lock it in the first place, right?

Captcha to stop spambots

Will do.

Filters to stop humans from copy/pasting Facebook auto-like and such

Already done.

Moderators to stop humans who evade the filters

Also will do.

On another security issue, I think it would be a good idea to require that all scripts have @grant metadata, then show what permissions it uses on the script's page.

Interesting idea... If you have any specific thoughts, please share them at https://github.com/JasonBarnabe/greasyfork/issues/50
