
Discussions, ratings, and score

§
Posted: 22/10/2014
Edited: 22/10/2014

Due to some recent changes and questions about script ratings, discussions, and scores, I'm posting this information to describe the intent of these systems. This is open to discussion and change.

Script discussions are there for script users to contact script authors. Script ratings and scores are there to give script users an indication of what other script users think of a script. Because these things exist to give script users their say, script authors are not able to edit, close, or delete them.

Script discussions are not intended as an issue tracker for script authors. If script authors want users to use their issue tracker instead, they should include a @supportURL meta key in their script. If provided, Greasy Fork will display this at the top of the feedback page.
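
For illustration, a @supportURL line sits in the script's metadata block alongside the other keys. A minimal sketch (all names and URLs below are placeholders):

```js
// ==UserScript==
// @name         Example Script
// @namespace    https://example.com/
// @version      1.0
// @description  Placeholder script showing where @supportURL goes
// @match        https://example.com/*
// @supportURL   https://github.com/example/example-script/issues
// ==/UserScript==
```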

Ratings can be unfair. Users can give something a bad rating for something that promptly gets fixed or was never a problem in the first place. This is the nature of ratings/reviews anywhere, and all scripts on Greasy Fork are under the same rules. I don't want myself or other moderators to be arbiters of truth; only in cases where a discussion is not at all related to a script will the discussion be dealt with by moderators.

The score a script gets is based on its ratings and the number of people who have added it to their favorites. The number is calculated with a statistical method representing a projection: if everyone rated the script, there would be a 95% chance it would receive at least x% good ratings. A favorite is counted the same as a good rating. An OK rating is counted as half a good rating and half a bad rating. Every good rating pulls the score toward 100, every OK toward 50, and every bad toward 0. As a script gets more ratings, each individual rating is worth less. Scripts with no ratings get a score of 5, which places them just below scripts with a single OK rating.
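
For the curious, here is a minimal sketch of that calculation, assuming it is the lower bound of a 95% Wilson score confidence interval (the function name and layout are mine, not Greasy Fork's code, but the results do reproduce the example scores quoted later in this thread):

```js
// Score sketch: lower bound of a 95% Wilson confidence interval over
// "good" votes, scaled to 0-100. A favorite counts as a good rating;
// an OK counts as half good, half bad.
function scriptScore(good, ok, bad, favorites) {
  const n = good + ok + bad + favorites;     // total votes
  if (n === 0) return 5;                     // default score for unrated scripts
  const p = (good + favorites + ok / 2) / n; // observed fraction of "good"
  const z = 1.96;                            // z-value for 95% confidence
  const lower = (p + z * z / (2 * n)
    - z * Math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)))
    / (1 + z * z / n);
  return 100 * lower;
}

scriptScore(1, 0, 0, 0);  // ~21: a single "good"
scriptScore(57, 0, 0, 0); // ~93: 57 "good"s
scriptScore(0, 1, 0, 0);  // ~5.5: a single OK, just above the default 5
```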

Some suggestions I have for script authors concerned about bad ratings:

  • Test your scripts thoroughly, and be clear what your script does and doesn't do.
  • Encourage people who like your script to post a good review or add it to their favorites.
  • Subscribe to notifications on discussions about your scripts.
  • Fix users' concerns promptly, reply to their discussions, and suggest they either edit their post to update their rating or post a new discussion with the updated rating. Only one rating per user is counted, and a good rating will override anything else the user posts.
§
Posted: 22/10/2014

I think that it should be stated more explicitly that the discussion page is not an issue tracker.

§
Posted: 22/10/2014

When @supportURL is provided, it'll say either "Visit the author's support site." or "E-mail the author for support.". Maybe this should be a bit stronger?

§
Posted: 23/10/2014

Personally, I think rating a script should be completely separate from making a comment on a script.

wOxxOm (Mod)
§
Posted: 23/10/2014
Edited: 23/10/2014
Personally, I think rating a script should be completely separate from making a comment on a script.

Of course.
The current system doesn't reflect anything, because only a few users post comments while 10-1000 times more of them just use the scripts happily (or not); that opinion should be captured by a separate rating vote button on the script info page.

§
Posted: 23/10/2014

I've found that if you let people review without entering a comment, then you get authors complaining about bad reviews where they have no idea what the problem is.

If you want to just count how many people are using the script, that's what the install and update check (new!) counts are for.

wOxxOm (Mod)
§
Posted: 23/10/2014
Edited: 23/10/2014
I've found that if you let people review without entering a comment, then you get authors complaining about bad reviews where they have no idea what the problem is.

That's not a problem of a separate rating system at all, but a problem of those authors. Currently the review-based ratings are wildly extrapolated from a few reviews to the entire user base, which is 10-1000 times larger. Another problem is that this could be headed into the same mess as the compulsory review+rating systems on the Firefox AMO site and the Chrome Web Store: useful reviews are buried under tons of mostly non-informative text entered just to gain access to the rating button. This is why authors there state that issues should be reported on a separate issue tracker or via email or something. It makes it pretty obvious that the combined review+rating system doesn't work and never will, simply because it's a non-working concept.

If you want to just count how many people are using the script, that's what the install and update check (new!) counts are for.

The update check count is indeed a terrific and useful addition, because the total install count is hardly representative - many users may install a script to try it and uninstall it a few minutes/days later. However, how is it related to the final rating calculation? What's the formula? Do you use it to count users by UA strings (and so on), or is it just a raw number (which is hardly useful)?

§
Posted: 23/10/2014
Edited: 23/10/2014
I've found that if you let people review without entering a comment, then you get authors complaining about bad reviews where they have no idea what the problem is.

If you want to just count how many people are using the script, that's what the install and update check (new!) counts are for.

This update check counter is a nice feature. Shame my scripts miss out on it, as my updates are usually done via OUJS.

The update check count is indeed a terrific and useful addition, because the total install count is hardly representative - many users may install a script to try it and uninstall it a few minutes/days later.

Unless it has changed, the install counter only counts installs via the install button. Updates through the GreaseMonkey updater or via script are not included.

The problem with the current system is that it is unfair. A more popular script is most likely to get a lower score, as the amount of feedback will be greater. Most of it would be neutral, but a few posts would carry a negative rating due to a fixable bug, and the rating is never changed by the user afterwards.

A good example of this is the most-installed script, barbarossa69's "KOC Power Bot" (KPB), which has a rating of 50 - much lower than my "Citrus GFork" (CGF), which has 64.6. CGF has a total install count of 105; KPB gets several hundred installs daily.

The only reason KPB has a worse rating is that people reported bugs, which I assume got fixed. The most popular script - one that has to be continually updated to keep up with constant changes to the targeted site - will by default get much more bad feedback in the form of bug reports.

§
Posted: 23/10/2014
Edited: 23/10/2014

The current feedback options:

  • No rating (just a question or comment)
  • Report script (malware, stolen code, or other bad things requiring moderator review)
  • Bad (doesn't work)
  • OK (works, but could use improvement)
  • Good (works well)

The "Bad (doesn't work)" option is abused. Most people might love the script but select that option because they are just trying to be helpful by reporting a bug they would like fixed. They choose "doesn't work" as it seems the most logical option; it does not cross their mind that they are rating the script.

In 99% of cases the bug will be fixed, and in 100% of cases the issuer of the feedback won't change the rating. The idea of the author having to ask the user to change the rating sounds bad.

One can change one's opinion over time, but with the current system, if you have posted several pieces of feedback, you have to change them all to lower the script's score, as it automatically keeps the highest rating.

Maybe change the listing to something like:

  • Bug or Comment
  • Flag script (malware, stolen code, or other bad things requiring moderator review)
  • Rate Bad (Don't like it)
  • Rate OK (It helps)
  • Rate Good (Love it, can't live without it)

Though I still see tying the rating to feedback as a bad idea.

§
Posted: 23/10/2014
Currently the review-based ratings are wildly extrapolated from a few reviews to the entire user base, which is 10-1000 times larger.

The fact that a very small percentage of people will post reviews is true for any review system, whether it requires a discussion or is just clicking a thumbs-up button. The alternative is to not have a review system at all.

The new calculation method takes into account the number of reviews posted. For example, one "good" only brings the score to 21. The top script, with 57 "goods", gets 93.

Another problem is that this could be headed into the same mess as the compulsory review+rating systems on the Firefox AMO site and the Chrome Web Store: useful reviews are buried under tons of mostly non-informative text entered just to gain access to the rating button. This is why authors there state that issues should be reported on a separate issue tracker or via email or something.

As an author on both of those sites, it's not the junk that makes me want people to post issues somewhere else. It's that it's hard to track and follow up on the issues provided, because it's designed not as an issue tracker but as a review system. The same goes here, which is why authors can specify a support URL that is linked on the feedback page, above the UI for the review system.

A review system should be a review system, an issue tracker should be an issue tracker. While they are similar in many ways, they have aspects that are incompatible.

However, how is [update check stats] related to the final rating calculation?

The number of installs and update checks has no effect on a script's score. The score is based only on reviews and favorites.

Do you use it to count users by UA strings (and so on), or is it just a raw number (which is hardly useful)?

An IP address counts only once per script per day.

Unless it has changed, the install counter only counts installs via the install button. Updates through the GreaseMonkey updater or via script are not included.

Correct; the install counter is unique IPs clicking that green install button, and update checks are unique IPs grabbing the .meta.js version.
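
A rough sketch of the once-per-day rule described here, purely illustrative (the real bookkeeping is server-side and isn't shown in this thread):

```js
// Count an event (install or update check) at most once per IP,
// per script, per calendar day.
const seen = new Set();
const counts = new Map();

function countOnce(ip, scriptId) {
  const day = new Date().toISOString().slice(0, 10); // e.g. "2014-10-23"
  const key = ip + "|" + scriptId + "|" + day;
  if (seen.has(key)) return;            // this IP was already counted today
  seen.add(key);
  counts.set(scriptId, (counts.get(scriptId) || 0) + 1);
}
```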

The problem with the current system is that it is unfair. A more popular script is most likely to get a lower score, as the amount of feedback will be greater.

That's not true; as mentioned above, the score is "reinforced" by multiple ratings. Scripts with no ratings start out with a score of 5, and a single OK rating is enough to bring it higher. Assuming the same good:ok:bad ratios, and that "bad"s are in the minority, the script with more reviews will be scored higher.

The most popular script - one that has to be continually updated to keep up with constant changes to the targeted site - will by default get much more bad feedback in the form of bug reports.

Put another way, the more likely a script is to be broken (even temporarily), the lower the score it will tend to get. I consider this a feature, not a bug. Even if the breakage is outside the author's control, the score is a good indication of what users think of the script.

§
Posted: 23/10/2014
In 99% of cases the bug will be fixed, and in 100% of cases the issuer of the feedback won't change the rating. The idea of the author having to ask the user to change the rating sounds bad.

How about if there was UI for the reviewer to change the rating when they do a reply?


* Bug or Comment
* Flag script (malware, stolen code, or other bad things requiring moderator review)
* Rate Bad (Don't like it)
* Rate OK (It helps)
* Rate Good (Love it, can't live without it)

I'd be open to changing the text, but I think this would result in fewer "good"s. I think that if something has significant bugs, "OK" is warranted, so maybe that's the one that needs changing. "Works, but has significant issues"?

wOxxOm (Mod)
§
Posted: 23/10/2014
Edited: 24/10/2014
[...] a very small percentage of people will post reviews is true for any review system [...] The alternative is to not have a review system at all.

I use *ratings* on many sites (IMDB, etc.), and it's definitely easier to make a single click on the corresponding star in the rating box than to write a review. Thus the number of users who rate is orders of magnitude larger than the number of those who bother to write at least one sentence in a review (a 1000x difference for popular items is not uncommon on large sites). Now this is what makes a rating quite representative and indicative of user opinion, isn't it? BTW, some popular sites with a huge user base (rottentomatoes and kinopoisk, to name a few) show the reviewers' rating separately - a wise move considering the above.

[...] it's designed not as an issue tracker but as a review system. The same goes here [...]

It doesn't apply here, because it's "Feedback", not a "Review". Users are going through the "pains" of registering to post issues or wishes, not *reviews*.

A review system should be a review system, an issue tracker should be an issue tracker.

And a rating system should be a rating system - if there is no separate rating system, then there should be no ratings at all. That would be better than showing some obscure number calculated from an inappropriate source.

Put another way, the more likely a script is to be broken (even temporarily), the lower the score it will tend to get.

I see this point as being quite far from correct. As TimidScript pointed out, the script usually isn't *broken*; the user might even have an issue caused by a third-party factor (some other userscript or whatever).

[...] the score is a good indication of what users think of the script.

I see it as either exactly the opposite ("deception") or something mostly unrelated to the rating (low ratings now indicate "high development activity", lol... and that the script is so valuable to users that they bother to register and provide feedback).

§
Posted: 23/10/2014
Edited: 24/10/2014
That's not true; as mentioned above, the score is "reinforced" by multiple ratings. Scripts with no ratings start out with a score of 5, and a single OK rating is enough to bring it higher. Assuming the same good:ok:bad ratios, and that "bad"s are in the minority, the script with more reviews will be scored higher.

The way people use the feedback system is not the way you intend or want it to be. It is without a doubt being used by the average user as a bug-reporting system. Most script authors will not have a separate support site.

As aforementioned, most people who give feedback do not think they are rating anything. They are mainly reporting bugs through the default "No rating" option, and a small minority through the "Bad" option. Sometimes they even praise the script in the comment while still selecting the "No rating" option, or worse, the "Bad" option. Unintentional abuse.

There is little doubt that, under the current system, a popular script that is more prone to continuous updates out of necessity is more likely to get a lower score. And most people who select the "Bad" option are merely trying to report a bug, not rate the script.

When I first saw `Bad (doesn't work)`, I honestly thought it was a way of alerting the author to a bug, and this is the way most people would see it. I had to understand the rating system before I realised its intended usage. It shouldn't be like that; it should be intuitive.

How about if there was UI for the reviewer to change the rating when they do a reply?

I was going to suggest having the author raise a flag to the issuer, who would then be forced to state whether the bug is fixed or not. It could only be raised once a week or something. Sounds awful, though; such a convoluted system should not be necessary.

I'd be open to changing the text, but I think this would result in fewer "good"s.

On the contrary, I think it would more than likely give more "Good"s and fewer "Bad"s. Currently most feedback is "No rating".

Trying to associate ratings with bugs is a mistake IMHO, and this is where people get confused and select "Bad" when their whole intention is merely to get a fix for a script they like. Mind you, one would not automatically associate the feedback tab with rating.


Ratings should not be related to bugs. They are separate things: a "bugless" script can still be rated bad, and a buggy script can still be rated good. That's a fact. My Schmoogle script had bugs, and still does, but it was rated 5 stars on USO. It works well enough that people find it useful.

Without a doubt you need to change the text. It should be clear that you are rating a script, and the rating really should not be tied to whether the script has a bug or not. Maybe something like the list below, keeping the rating separated; perhaps using differently coloured text for the rating options.

- Bug or Comment
=====================
- Flag script (malware, stolen code, or other bad things requiring moderator review)
=====================
- Rate Bad (Don't like it)
- Rate OK (It helps)
- Rate Good (Love it)

§
Posted: 23/10/2014

The average user will not be willing to be guided from one page to another just to report a bug. I see this on another site where my job is to collect bugs and report them. The users can report bugs in two ways:

  1. Write in a special area in the forum
  2. Use a ticket system

Even with those two options, I will still get other reports via forum messages or IRC. Why? Because it's easier. The same applies here. Some users have an account on Greasy Fork but not on other sites like GitHub, even when the issue tracker is supposed to be on GitHub. What will they do? Create another user account on GitHub? No! Usually they will just report it here. It's also easier for script authors to have the information here than to be forced to think of another system where they can track issues...

We suggested that script authors be able to mark a bad rating as "closed" so that it no longer counts towards the score (with the option for it to be reopened). If you don't add something like this, you should think about an easier way to change a rating. I assume that most users don't even know they can change their rating, as they have to edit their first post to do so (seen on one of my scripts, where the user posted an additional "good" rating instead of editing the one where the bug was reported).

Maybe another solution would be to provide a simple issue tracker alongside the rating system. Users could then either report an issue or rate the script => maybe provide a second link stating "Report an issue" which always leads to an unrated entry. Advantage: support can be provided directly on GF, and bugs don't lead to bad ratings.

Test your scripts thoroughly, and be clear what your script does and doesn't do.

I can test my scripts as much as I want, but I can never ensure they will always work. Like many others, my scripts are usually for one specific page, not general-purpose. As a result, the smallest change to the page can break my whole script, and since I don't know when the page will be changed, there is nothing I can do to prevent this breakage.

§
Posted: 24/10/2014

A good example of this is the most-installed script, barbarossa69's "KOC Power Bot" (KPB), which has a rating of 50 - much lower than my "Citrus GFork" (CGF), which has 64.6. CGF has a total install count of 105; KPB gets several hundred installs daily.

The only reason KPB has a worse rating is that people reported bugs, which I assume got fixed. The most popular script - one that has to be continually updated to keep up with constant changes to the targeted site - will by default get much more bad feedback in the form of bug reports.

The KoC scripts I am custodian of are not your average scripts, though - they've been in development for 5+ years by a changing team of people with very different programming styles, and they are huge: typically 10,000+ lines of code each. They are also very vulnerable to changes in the game's code, and heavily dependent on unsafeWindow.

The majority of "bugs" that are reported are caused by either changes to the game, or people running unsupported browser versions and/or greasemonkey/scriptish.

People are always going to report bugs here as a first port of call, as this is where they're downloading the scripts from. I don't think they realise that when they rate their comment, they are actually rating the script as a whole. Rating a script like KoC Power Bot "bad" because one small function doesn't work after a change to the game code is nonsense.

And people never go back and change their rating.

Having said all that, as I said, these scripts are not typical of the scripts on this site, and are for a very specific purpose, so to base your rules on these doesn't really make sense either. People will not decide to install or not install the KoC scripts based on rating, but usually based on word of mouth among their friends in the game.

My concern before was that I was in danger of having a negative rating, but that doesn't seem to be the case now because of the recent changes, so I'm not really that bothered anymore about rating.

I think you should just show the number of people who have favourited the script, and have a simple 5-star rating somewhere, where they can click on a star from 1 to 5. People seem to understand that sort of rating system perfectly. No need to over-complicate things with a load of mathematics...

§
Posted: 24/10/2014
Having said all that, as I said, these scripts are not typical of the scripts on this site, and are for a very specific purpose, so to base your rules on these doesn't really make sense either. People will not decide to install or not install the KoC scripts based on rating, but usually based on word of mouth among their friends in the game.

I don't know anything about the script, and usually a rating does not change a user's mind about installing it unless the value is low, like 2 stars. How a script gets heard of is irrelevant; the rating should reflect the script's value to the site's community, not how it gets noticed. That said, the KoC script should have a much higher value than my script. It is without a doubt a more popular script.

It is not apparent in the current system what the score means or how it is evaluated. A random number like 95 means nothing unless you know it's out of 100 and how it is calculated. Actually, even if you know, it still means little. The install and favorite counts are more reflective.

I think you should just show the number of people who have favourited the script, and have a simple 5-star rating somewhere, where they can click on a star from 1 to 5. People seem to understand that sort of rating system perfectly. No need to over-complicate things with a load of mathematics...

+1
The OUJS system would be the next best thing; at one time I thought it was better, but now I think it isn't as good. A 5-star system + vote counts + reviews, favorites, and an install counter is still the best and most informative setup.

The install count should include updates, as it would then reflect at first glance a truer measure of consistently active users. An uninstalled script will never get updated, and that can have a huge impact on the install count.

Currently I do not use the rating system to gauge the value of a script. I find it misleading as to a script's value.

§
Posted: 24/10/2014

The install count should include updates, as it would then reflect at first glance a truer measure of consistently active users. An uninstalled script will never get updated, and that can have a huge impact on the install count.

Update counts should be excluded, in my mind, because different script hosts may check for updates at different frequencies. As a result, scripts for one script host may get a greater update count than scripts for another.

---

I think you should just show the number of people who have favourited the script, and have a simple 5-star rating somewhere, where they can click on a star from 1 to 5. People seem to understand that sort of rating system perfectly. No need to over-complicate things with a load of mathematics...

+2
And you could count a 5-star rating as 1 good, and a 4-star rating as 3/4 good and 1/4 bad, so the current ranking system can still work. The ranking can be used for sorting scripts on the search page or selecting scripts to show on the main page, but displaying the AVERAGE star rating, the detailed count of each N-star rating, and the favourite count is more understandable for users.
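
If I read the proposal right, the conversion would be linear in the number of stars; a sketch (the interpolation for 2 and 3 stars is my assumption, as the post only spells out the 5-star and 4-star cases):

```js
// Convert an N-star rating (1-5) into fractional good/bad votes:
// 5 stars -> 1 good; 4 stars -> 3/4 good + 1/4 bad; 1 star -> 1 bad.
function starsToVote(stars) {
  const good = (stars - 1) / 4;
  return { good: good, bad: 1 - good };
}

starsToVote(5); // { good: 1, bad: 0 }
starsToVote(4); // { good: 0.75, bad: 0.25 }
```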

---

BTW, what about changing the "add to favourite" option to a distinct "Like" button?

§
Posted: 24/10/2014
That said, the KoC script should have a much higher value than my script. It is without a doubt a more popular script.

But it's only popular among players of the game, so popularity doesn't really mean anything. Their only value to this site would be the number of visitors they bring to the site.

§
Posted: 24/10/2014
But it's only popular among players of the game, so popularity doesn't really mean anything.

That goes for every other script as well. When I want a Pixiv script, I would not give a cent for any higher-rated script that is not a Pixiv script.

§
Posted: 27/10/2014

Ugh. I'm only reading this discussion because I noticed the rating beside my scripts and couldn't work out what it meant. Usually a 5 would be a 5/5, but then some scripts had 20-something.
So most of my scripts, being 5 out of at least 20-something, had been deemed bad? The mouse-over text was so completely unhelpful it shouldn't be there; it should explain what the number means.
The user should be given the range of possible values - as in, possible values range from 0-100?

As it is, the rating is meaningless; it is not obvious what the number means.
Also, reading the above, the system seems needlessly confusing. Does any other site bother with such a scheme?

A default of 5? It's too easy to read that as 5/100.
It's like every such script has been given a bad review.

Userscripts.org had a better system. 1-5 stars.

Having said that, as for ratings, I like the idea, though mostly what I look for is the 1/5-star reviews - as in, why shouldn't I install this thing? Is it malware? Is it broken or incompatible? The current system doesn't seem to let me do this easily.

§
Posted: 27/10/2014

Reddit, Yelp, and Digg all reportedly use the same kind of calculation. The difference may be that they use the number for sorting, but display another calculation. Reddit, for example, shows upvotes minus downvotes, but that doesn't match up with how things are ordered.
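
To see how the displayed number and the sort order can diverge, compare "up minus down" with a Wilson lower bound (a sketch under the same 95%-interval assumption as earlier in this thread; not actual code from any of those sites):

```js
// "Up minus down" says 60-40 (+20) beats 10-0 (+10), but the Wilson
// lower bound ranks them the other way around.
function wilsonLower(up, down) {
  const n = up + down;
  if (n === 0) return 0;
  const p = up / n, z = 1.96; // z-value for 95% confidence
  return (p + z * z / (2 * n)
    - z * Math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)))
    / (1 + z * z / n);
}

wilsonLower(60, 40); // ~0.50 -> sorts lower despite the higher displayed +20
wilsonLower(10, 0);  // ~0.72 -> sorts higher despite the lower displayed +10
```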

§
Posted: 28/10/2014
Reddit, Yelp, and Digg all reportedly use the same kind of calculation. The difference may be that they use the number for sorting, but display another calculation. Reddit, for example, shows upvotes minus downvotes, but that doesn't match up with how things are ordered.

I agree with sorting scripts not on raw voteUp/voteDown counts but on some function f(voteUp, voteDown). But I don't think "vote up" is the same as "feedback", and I would prefer to split "vote up/down" from "feedback".

wOxxOm (Mod)
§
Posted: 30/10/2014
Edited: 30/10/2014
Reddit, Yelp, and Digg all reportedly use the same kind of calculation[...]

I find it extremely difficult to understand the logic behind referencing these sites, which I deem ultimately inappropriate models for a script database and one-stop script shop such as GF. Or, "to put it unmildly", it seems nonsensical. Personally I don't care about ratings, not even a millibit, but it'd be nice to believe that by 'sensible', 'logical', 'indicative', and 'helpful' we mean the same things, because otherwise the current rating system is more of a troll exercise (oh, here's a daily victim) than anything else (especially since reddit is mentioned). Well, I'm kinda just teasing you, Jason, but there's a grain of truth in every joke, as they say...

§
Posted: 30/10/2014

Well, they're all sites full of stuff that people theoretically visit/use/read and then provide their opinion on, so I don't see how it's inappropriate here. If you don't care about ratings as a concept, I guess you can ignore them. It's just one piece of data among others.

My current plan is:

- Rewrite the OK description to "sound" worse (e.g. "significant problems")
- Rewrite the "question/comment" option to include things like feature requests and bug reports
- Get the forum localized and localize the rating options
- When a user is replying to a review thread they started, give them the option to change their rating
- If a supportURL is defined, make that option more prominent
- Switch the score display to show something like "X up, Y down" while still sorting based on the current calculation

wOxxOm (Mod)
§
Posted: 30/10/2014
Edited: 30/10/2014

Your roadmap sounds very promising.

P.S. I'll gladly admit I was wrong if everything you plan works out smoothly, but just to clarify my previous sceptical message: those sites deal with general content, which is perceived subjectively by definition, whereas a script database is something quite different. Also, negative feedback mostly contains issues that get resolved, which makes it wrong to use as a rating source.

§
Posted: 31/10/2014

Thanks for sharing the road map.

P.S. I'll gladly admit I was wrong if everything you plan works out smoothly, but just to clarify my previous sceptical message: those sites deal with general content, which is perceived subjectively by definition, whereas a script database is something quite different. Also, negative feedback mostly contains issues that get resolved, which makes it wrong to use as a rating source.

+1
