Fair Game: Review Scores are Redundant

Has Stevie hit the nail on the head?

A few weeks ago, review aggregation site Metacritic came under fire from disgruntled community members of NeoGAF and N4G after it posted an 8/10 summation from Variety magazine for Sony Computer Entertainment's potential PlayStation 3 killer app LittleBigPlanet.

While 8/10 is an undeniably fine score, the resulting fanboy accusations claimed Metacritic was deliberately downgrading LittleBigPlanet's gathering acclaim in order to favour Sony's rivals. These knee-jerk allegations (with the emphasis on "jerk") arose because Variety's respected entertainment reviews, be they for movies, television or videogames, do not carry scores.

Those choosing to look beyond idiotic conspiracy theories, which largely involved fat cheques and Microsoft's heavily-populated back pocket, may have had any lingering questions about the Variety review answered by an explanatory forum post offered up by Metacritic founder Marc Doyle. In his post, Doyle highlighted site policy by pointing out that videogame reviews submitted to Metacritic without accompanying scores are assigned one prior to online posting based solely on the tone of the review text.

However, despite Metacritic's explanation, which does appear to support its 8/10 evaluation, it still seems fair to question the review score because it has not come from the source publication. This, in turn, raises the question: do we actually need scores and, if so, who needs them more?

The likes of Metacritic and Game Rankings subsist on review scores and would be hit hard without them. But when it comes to the game-playing public - the online demographic most likely to be trawling through individual reviews and collated scores - it would be difficult to find a single gamer who isn't guilty of dismissing a reviewer's worded appraisal in favour of skipping straight to the easily scanned concluding paragraph and points score.

Short attention spans aside, this general lack of respect for the reviewing process (something we do for you) could be attributed to an unwillingness to actually 'read' a review when many prominent review destinations offer quick-fix video appraisals to sate curiosity in the quickest possible time. Yet it could also be the result of gaping discrepancies in the consistency and reliability of videogame evaluation.

Take a recent example: GameTrailers ran its video review for survival horror epic Dead Space, during which the reviewing narrator spent several minutes heaping praise on creator Electronic Arts before awarding the game a score of 8.8/10. As with the LittleBigPlanet incident, site comments swiftly questioned why the final score had not been higher given the reviewer's clearly ebullient gushing.

Now, without getting into the pathetic decimal and incremental scoring used by the respective 10-point and 100-point scales, there is clearly a growing tumour of belief across vast swathes of today's gamers that 'good' releases should score 9/10 by default. Wonderfully malignant as an ominous fanboy device designed to push one hardware platform's exclusives further than those of a rival, such a damaging yardstick of measurement is only likely to further consume the reputation of a dubious scoring system already in danger of becoming completely inconsequential.

Clambering boldly atop my well-trodden soap box, I can say with concrete conviction that, after reviewing videogames for numerous publications over the past five years, scoring systems differ radically from place to place. It is my belief that they are not only guilty of being unreliable and misleading for the reader, but also of being wholly unnecessary when it comes to delivering solid objective appraisal.

Bottom line, it's how the writer conveys his or her words that counts, not how many points they choose to slap on at the end of the review; points that the reader will invariably misconstrue or criticise because everyone seems to have their own notion of point value. Once the reader passes the final full stop, internalisation of the writer's thoughts should then contribute to what is supposed to be 'an informed' decision as to whether the game in question is worthy of their time and money. An arbitrary score that is so obviously open to interpretation and vulnerable to dispute does anything but help in this sense.

For example, although a minority of review sites appear to rightly outline a system where 5/10 is deemed average, a great many seem to be keen on pandering to PR companies and publishers by using 7/10 - while others simply don't have a defined system at all. The latter reliance, whereby reviewers are left to follow their own paths of evaluation with no rigid guidelines, results in massive contradictions when it comes to score continuity, which is only likely to damage a publication's overall reputation.

And reputation plays a large part in building a loyal audience that's prepared to have faith in what your reviewers have to say. For evidence of this, one need only look at unflappably stringent and highly respected UK publication EDGE magazine. Here, a score of 5/10 is absolutely average, a 7/10 is clearly hard-earned praise, 8/10 is nothing short of impressive, 9/10 is an uncommon mark of outstanding achievement, and 10/10 is a blessed rarity that bestows a tangible sense of genuine greatness upon a game.

Conversely, however, despite EDGE's best efforts, one look at Metacritic's increasingly cramped pages shows just how many bottom-feeder review sources are being given a voice with which to almost nullify those prepared to exercise an attuned critical eye and offer scores accordingly.

While it's no surprise to regularly see review scores from gaming heavyweights such as IGN, GameSpot, Eurogamer and 1UP on Metacritic and Game Rankings, the supposedly reputable are quickly losing ground to the influence of half-baked publications that are little more than borderline blogs cooked up in a dingy bedroom.

These sources tend to swing between poorly-written fanboy drooling and biased platform bashing, clearly penned by pre-pubescent, pro bono teens with a worryingly fanatical grip on their joysticks but not the English language. What's more, a solid 7/10 from EDGE or Eurogamer tends to carry far less weight for prospective consumers when viewed below 25 other scores - many of which are decimals away from perfection and make the review process a laughing stock.

Yet a seemingly respectable reputation can also work against the greater good of videogame reviewing - take GameSpot's recent run of unsavoury speed bumps for instance. For many, GameSpot has been the leading source of all things gaming for at least a decade, earning reader respect and industry renown for its unbiased opinions and review continuity. However, with great success comes great pressure, which began to tell when 1UP reported at the end of 2007 that Sony Computer Entertainment had actually complained about the (totally respectable) 7.5/10 tagged onto the review for high-profile PlayStation 3 exclusive Ratchet & Clank Future: Tools of Destruction.

And then there was the infamous 'Gerstmann Gate,' which saw GameSpot's long-serving Editorial Director, Jeff Gerstmann, allegedly fired by the publication after Eidos Interactive supposedly demanded changes to the clearly negative tone of his review for Kane & Lynch: Dead Men (which he awarded 6/10). Gerstmann's original review, which labelled the game "lazy and ugly," was later replaced by an amended version, and various reports connected the fury of Eidos with the substantial advertising dollars the publisher was channelling through GameSpot to push Kane & Lynch's release. Gerstmann left soon after and has since refused to comment on the events leading up to his dismissal.

In the wake of Gerstmann's controversial firing, Valleywag posted a conversation with an anonymous GameSpot insider, during which the source said that Josh Larson, the publication's Executive Editor, had "implied that 'AAA' titles deserved more attention when they were being reviewed, which sounded to all of us that he was implying that they should get higher scores, especially since those titles are usually more highly advertised on our site."

Interestingly, fellow review heavy hitters Official Xbox Magazine and GameTrailers scored the game 8/10, while 1UP scored it 7.5/10 and IGN and Eurogamer each scored it 7/10. None of the aforementioned sites reported receiving publisher criticism for their scores. Our own review scored the game 58/100, saying it was "impossible to recommend for anyone other than hardcore shooter fans willing to trawl through hours of mediocrity in return for minutes of unfulfilled promise." We didn't receive any complaints from Eidos; although, to be fair, we're a little fish in a big pond... and we weren't running a massive Kane & Lynch ad campaign at the time.

Of course, while I'm in no way suggesting these upper-tier sites exercised publisher-prompted favouritism through their Kane & Lynch reviews (although the game was clearly not deserving of such generosity), there are surely instances of review sites assigning intentionally controversial scores to lure rabid fanboy traffic. This somewhat desperate practice of link baiting may well see gaming sites temporarily infused with a sudden traffic injection, but doesn't such action also leave review scores all but worthless as an evaluation tool, standing only to incite a reactionary response from those intelligence-lite gamers blinkered by platform or title allegiance?

And, inflammatory or engineered scores notwithstanding, while review sites benefit from any positive or negative feedback attributed to their coverage (bad press is still press), it's surely the software publishers themselves that have the most to gain from the bloated and fractured points system that hangs like a weight around the neck of today's reviews.

In one of his recent Newsweek articles, resident games expert N'Gai Croal points to how several videogame publishers have publicly acknowledged the use of score averages on Metacritic and Game Rankings to help decide bonuses for development studios. He also refers to the considerable industry effect of such aggregation sites in the eyes of publishers, and adds that such sites "have rapidly become shorthand for product quality."

Citing a communication with an unnamed games journalist from the enthusiast press, Croal goes on to say how tensions between gaming publications and videogame publishers are at their most taut during the big-money Christmas build-up, which is typically the period when publishers roll out their high-stakes software releases. Croal's piece suggests unsatisfactory reviews during this time often lead to publishers homing in on those scores they believe will damage a game's average - and invariably their profit margins.

A poor Metacritic or Game Rankings average is almost certain to impact a game's commercial reception at such a critical time on the retail calendar. And, with scores from aggregation sites regularly quoted through game blogs, mainstream news stories, and even the reports of industry analysts, what would the removal of review scores do to anxious publishers increasingly reliant on their influence to drum up sales? Indeed, while publishers stand to gain the most from the existence of imbalanced, biased and unreliable scores, they clearly also have the most to lose should the games media decide to abandon them.

Sadly, however, while only the cerebrally challenged will actually miss review scores, the games media isn't likely to cast the scoring process into oblivion any time soon. Its lack of continuity and persistent divisiveness make it a hugely beneficial traffic tool, an effect only compounded (intentionally or unintentionally) by differing individual site guidelines that propel the decidedly mediocre onto the shoulders of those occasional software giants.

And, while running the risk of being spanked by the editor, I shall stand on tiptoe here atop my soap box and say, for the record, that I believe this site should drop its review score policy. I will also be brave enough to tar it as one of those sites guilty of the lax continuity that often prompts deserved backlash from its readers. Personally, my average is 5/10, but I cannot speak for all the reviewers that contribute to this site. That fact alone is surely reason enough to abandon the system; doing so would place more focus on the site's actual content and more impetus on the reader to make up their own mind based on reading, absorbing and actually evaluating the worth of the written word.

And so what if Metacritic then has to assess and assign a third-party score to all of this site's future review submissions? Given the current state of scoring across games publications, the concluding number would be just as worthless either way.
