So, I'm about three quarters of the way through Warcraft III, but I'm in desperate need of a break, and, in addition, I've got my first big English paper (nearly 20% of my grade!) due in four days. What this means for you is that I won't have a review up for a little while yet, so I decided to post a short (it's me, so you know that's a lie and that this is going to be really long) editorial to help tide you over, in case my Jing: King of Bandits review wasn't enough. This editorial is about the inflation of review scores, the "cursed 7/10," and a few ways to deal with them.
|I feel like this sometimes...|
To start off, let's talk about rating inflation. I don't know exactly when the 10-out-of-10 rating system was first applied to games and anime, but since its introduction, things have changed a little. Much like the U.S. Dollar (as many of us are all too keenly aware), review scores have become inflated over the years. I actually attribute some of this to the rise of bloggers and review sites that are not manned by so-called "professional critics." This is because review sites and blogs (especially blogs) run on people's opinions and likes. Here's a hypothetical (but often true) example: a blogger, let's call him Jim, really likes an anime which we'll call (for example purposes) "ABC," and he posts a review for it on his blog, JohnJacobsJingleheimenschmidt (give yourself a high five if you get that reference). His review is probably fairly brief and says "I liked this, this, and this about ABC," and he gives it a 10/10 because he likes it so much. What Jim didn't realize is that despite all the good things about ABC, it has (just as an example) terrible plot construction, which makes it unworthy of the 10/10, according to the "non-inflated criteria." What Jim just did was create score inflation; he rated something higher, perhaps far higher, than it should have been. Now, even if Jim is not a very prominent blogger, he has just lowered the standards for what makes something a 10/10. They were only lowered a tiny bit, but they were lowered nonetheless. Perhaps a more relevant example would be if Jim had reviewed ABC and given it a 7/10, saying that he recognized that the plot construction was poor, but that the good outweighed the bad. So, just for example's sake, let's say that the plot construction was so terrible that Jim was wrong, and that ABC deserved to be a 5/10. This is a much more common type of inflation, but it is inflation nonetheless. Now, I'd like to point out that there is nothing, nothing, wrong with Jim giving ABC a 7/10, or even a 10/10.
He is a blogger, and the whole point of blogs is to express opinion. Jim didn't do anything wrong, per se; he was just expressing his opinion. The problem, though, is that Jim inflated review scores for everyone, not just himself. Now, let's say blogger Mark posts a review of the show XYZ on his blog, MarkyMark44, and he gives that show a 7/10. Let's suppose that XYZ is a show truly deserving of the 7/10 score. However, people who see Mark's 7/10 will say "Oh, it's just a 7/10, that's the same score that stupid show ABC got. It must not be any better." And that is the problem with Jim posting that 7/10 review. Now, XYZ is considered just as bad as ABC, despite Mark wanting to communicate that the show was actually pretty good. It's Jim's fault, but you can't blame him, because he didn't do anything wrong. He simply shared his opinions, and who can blame him for that? This example showcases the main reason why score inflation has grown so much (to the point where a 7/10, instead of a 5/10, is considered average). A non-professional critic posted a review with a score that was too high, and as a result, he made all scores worth a little less, at least in the minds of the readers. Now, I'm not trying to say that a "professional" critic would do any better. As a matter of fact, whenever I hear the word "professional" applied to critics, I tend to get an ugly look of disgust on my face (that, however, is another post for another time). What I am trying to say is that a "well-rounded critic who rates based on criteria," like blogger Mark, probably wouldn't give ratings that cause inflation. The only reason I chose bloggers for the example is that they, generally speaking, write reviews based more on opinion than on criteria. Other reviewers are certainly capable of giving inflated ratings as well; I just chose blogs because they more commonly give such reviews. On to another section, which discusses one of the reasons why a 7/10 is considered "average."
|My reaction when I see a hugely inflated score.|
Here's how the standard school grading system translates scores into letter grades:
100 = A+
90-99 = A
80-89 = B
70-79 = C
60-69 = D
1-59 = F
This is a problem for reviewers, because people will naturally try to apply the same system to any rating scale of 10 or 100 (because the difference between a 10/10 scale and a 100/100 scale is paper thin). The result is something like this:
10/10 = Masterpiece, perfect, amazing, go watch/play it now (in other words, A+)
9/10 = Great show/game (A)
8/10 = Good show/game (B)
7/10 = Average show/game (C)
6/10 = Poor show/game (D)
1-5/10 = Failure of a show/game (F)
I have a big problem with this. Why? Well, I'm not going to challenge the grading system of schools, but a rating scale that writes off literally half of itself as "failure" is, at least in my opinion, both foolish and wasteful. You may as well have a 6/6 scale, because everything from 5 down is a failure anyway. However, I'll hold off on my opinions for a moment yet. What all this means is that people start associating grades with review scores. This results in the "average," the "passing grade," the "C," being a 7/10. This is the other reason that a 7/10 is average. Surprisingly, though, there is one last reason that 7/10s are average.
|I shouldn't need to tell you what I reserve this face for.|
|The desolate, rocky landscape that those who only go for higher-than-average will find themselves in.|
So now, we can finally talk about how all this relates to me. All this score inflation has left people like me, who dislike it, in quite a sticky situation. We want the 10/10 scale to look something like this:
10/10 = Masterpiece, perfect, go watch/play this right now
9/10 = Excellent product
8/10 = Great product
7/10 = Good product
6/10 = Above average product
5/10 = Average product
4/10 = Below average product
3/10 = Bad product
2/10 = Really bad product
1/10 = Terrible product, awful, the worst, avoid this show/game at all costs.
However, due to the inflation of scores across the internet, using this is extremely difficult. Here are a few solutions to this problem.
- Use a non-inflated 10/10 scale, and make your rating system very clear.
- Use a 10/10 scale, but rate using different criteria, so the ratings don't mean the same thing (e.g. rate based on how much you personally liked it).
- Don't use any kind of "overall" rating at all, but instead just let your review speak for itself.
- Don't use a 10/10 scale, but instead use something like Pros/Cons.
I'll go through the ups and downs each of these styles offers, starting with my own.
|The abandoned village that is the unadjusted 10/10 scale.|
|The alternative 10/10 scale is like being alone at sea; survival is difficult, but at least you'll never be crowded by others.|
Let's change examples. Let's say you use a 10/10-based rating system that rates shows on how effective they are [at appealing to their audience, tugging the viewer's heartstrings, whatever]. In this case, you still have to make very clear what kind of system you're using. If someone unfamiliar with your blog sees a low rating for a good show that you reviewed, they will assume that your low rating describes the overall quality of the show, not how poorly the show used symbolism (or whatever your criterion is). That reader will then go "WTF?" and either bash you about it, leave unsatisfied, or actually read the review and see what you're talking about. The first and second options are the much more common ones. This is why you absolutely have to make your review system clear, even more so than with my method. Of course, there are advantages to this system. For one, if it works, then you've forever freed yourself from the shackles of other people's rating systems. Your reviews will always be unique, and uniqueness on the internet is difficult to attain indeed. This also lets you appeal to a larger reader base, because you're different from everyone else and have something only you can offer.
|A style that uses no ratings is like climbing a mountain. Uphill until you reach the peak (i.e. get a dedicated reader base), then everything is downhill (easier) after that.|
|An alternative-to-ratings system is a hybrid: impossible to find a picture for, so just enjoy this pleasant screencap.|
In the end, though, there's no easy way to make an alternative rating system work. Overcoming score inflation is a difficult matter, and there's no easy answer. So there, that's part one of my first editorial (and it may very well be my last, but we'll see). I hope you enjoyed it and will enjoy the rest of the stuff on my blog! Thanks for reading!
Read part two here
What do you think is the best way to overcome score inflation? Do you think it's a problem in the first place? Share your thoughts!