Interview

Marc Doyle on the Metacritic Effect

Behind the scenes with the site's founder

For many, Metacritic is just a quick, digestible way of getting a cross-section of critical opinion on new releases. For others, it is harming the credibility of game reviews themselves and holding dangerous sway with publishers. We sat down with the site's founder, Marc Doyle, to get his views on the site and the recent controversies surrounding it (readers may also be interested in Stevie's article on the subject).

Thanks for speaking with us, Marc. What is your role at Metacritic?

I founded Metacritic with my partners Jason Dietz and Julie Roberts in 2001. I'm currently Metacritic's Games Editor and the Senior Product Manager for the site as a whole, which is now owned by CBS Interactive.

What was the original vision for Metacritic, and how has the site changed over the years?

The initial spark for Metacritic was ignited by my law school classmate Jason Dietz, who had created several websites during the internet's infancy, including "The List of Possible Band Names" which was amazing. He approached me and we developed our mission for the site which really has not changed since our initial discussions in 1999. Before the rise of the world wide web, consumers were at the mercy of their local critics for advice about which movies to see, what games to buy, etc. Furthermore, the influence of advertising campaigns and the emphasis on fawning quotations from obscure critics that nobody had ever heard of in newspaper/magazine ads was huge. Bad movies and games could be thrust on consumers without a great deal of education to rebut the messages from PR companies or the potential biases of individual critics. Metacritic's mission is to bring together the most professional, skilled and respected critics in each section of our site (movies, games, music, and TV) to provide our users with the most reliable indicators of quality upon which they can base their purchasing decisions. Again, this type of service would not have been possible before the web was developed. Our individual editors spend a great deal of time researching their individual industries to stay abreast of who the most reliable critics are.

How does Metacritic assess and score reviews that arrive without a score from the source publication?

All product pages on Metacritic state the following: "All critic scores are converted to a 100-point scale. If a critic does not indicate a score, we assign a score based on the general impression given by the text of the review." The following is from a forum posting of mine: In other sections of Metacritic, we regularly track publications that do not assign a score to their own reviews (LA Times, NY Times, Hollywood Reporter, Variety), and we've always estimated the scores based on the impression gleaned from the review. I've only very rarely taken this approach in the Games section of Metacritic because gamers and the games industry are so sensitive to our scoring system and process. However, I did track the New York Times game reviews for years, even though the paper published no scores. In that case, the lead critic from the Times, Charles Herold, emailed me his unpublished scores every time the NYT published a review and I used those numbers on Metacritic (see the scoreless New York Times review of The Orange Box by Charles Herold, which appears on Metacritic's Orange Box page with an 88 score). My argument is that if the score comes from the horse's mouth, from the critic him or herself, so that I know the critic's intent definitively, I am satisfied that I can maintain 100% accuracy (incidentally, we stopped tracking the NY Times when they laid off their veteran lead critic a few months back).

So with respect to Variety (a recent addition to our games publication lineup), I've come to the same arrangement with their lead editor. Each time a game is reviewed, he sends me the link to the review along with their unpublished 0 - 100 score - a score which I post at Metacritic without alteration or personal input. I've vetted a great many reviews and scores from Variety's team before deciding to pick them up like I do with every website and magazine that I decide to track.

Who is charged with that process, and how are they qualified in terms of industry experience (does MC have editorial staff)?

Each editor and his/her staff is in charge of assigning scores. Our team members have read tens of thousands of reviews (often hundreds of thousands) and we've honed our skills in assigning scores. However, in our movies section in particular, most of the critics we track have kept up a running dialogue with us, so that if we're "off" in the scores we've applied, they'll let us know about it, and we'll adjust the scores accordingly. Again, we don't do this in the games section - we simply receive the scores directly from the critics themselves (see Variety).

Do you believe that many review scores are misleading, given that scoring systems differ wildly from publication to publication?

No, I don't for the most part. With respect to publications which assign scores from the A - F scale, there are many schools of thought on how the letter grades should be converted to the 0 - 100 scale. Metacritic utilizes a principled conversion scale, equating F to Zero, the same way we do with the lowest scores from every other grading scale (4 stars, 5 stars, 0 - 10 points, 0 - 100 points, etc). Game Rankings, another aggregator, uses a different scale, and most schools use a radically different conversion, with "F" equating to approximately 58 (at least in United States schools). We can talk for hours about exactly why publications use the A - F scale rather than the more specific and simple number or star scales, but that can be fodder for a future article. But assigning an "F" review a 58 score makes very little sense to me. The important point is that Metacritic has used the same conversion scale since we launched in 2001, and every game, movie, TV show, and album has been treated exactly the same way with respect to the A - F conversion. The conversion chart is listed on our About Metascores page.
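The principle Doyle describes can be sketched in a few lines of code. Note the hedges: the interview confirms only that F maps to 0 and that every grading scale's floor maps to 0; the intermediate grade values below are an assumed even (linear) spacing, not Metacritic's actual published chart, which lives on its About Metascores page.

```python
# Hypothetical letter-grade conversion illustrating the principle from the
# interview: the scale's floor (F) maps to 0 and its ceiling (A+) to 100.
# Only the F -> 0 anchor is confirmed in the text; the even spacing of the
# grades in between is an assumption for illustration.
GRADES = ["F", "D-", "D", "D+", "C-", "C", "C+", "B-", "B", "B+", "A-", "A", "A+"]

def grade_to_score(grade):
    """Map a letter grade onto the 0-100 scale by linear interpolation."""
    step = 100 / (len(GRADES) - 1)  # 12 equal intervals between F and A+
    return round(GRADES.index(grade) * step)

print(grade_to_score("F"))   # 0
print(grade_to_score("A+"))  # 100
```

Under this mapping a C lands in the low 40s, which makes the contrast with the school-style conversion vivid: a school transcript would hand an F roughly a 58, higher than this scale awards a C.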

Publishers like EA have revealed previously that MC averages are at the heart of their planning/evaluation. Are you proud of this role, or is there pressure associated with it?

I'm proud that all kinds of users have found our Metascores to be useful in making their purchasing decisions, and more generally, as a dependable indicator of a product's quality. If game publishers find our Metascores useful in their attempts to improve the quality of the games they create, that also certainly makes me proud. But no, there is absolutely no pressure associated with the industry's use of our numbers. Our team is a passionate group, and we'd be doing our best to put out the best site possible regardless of industry attention. Publishers, developers, and PR firms certainly send me a barrage of questions on a daily basis ("why aren't you tracking X and Y publications which gave our game a 90 score", etc), and they let us know when we've made background information mistakes or inadvertently omitted scores from publications we regularly track (in the same way that regular users do). But I've never felt any pressure to add or drop a publication or individual review from any outside company or from Metacritic's parent company, which is certainly refreshing.

You've been taking measures to address the recent user score problem at Metacritic, but does this compound the notion that readers view scores as little more than weapons to beat fellow fanboys with?

I really don't think so. In the last week, we stopped allowing users to review games before their release dates, which was a major cause of the "anticipatory" or "enthusiasm" voting for games that users clearly haven't played. We will also make our user registration system much more robust when our site is redesigned, which will make "stuffing the ballot box" more difficult. But this problem with user voting is not unique to Metacritic. Such respected and larger sites as Amazon and IMDb have experienced this issue many times over the years. The important point to keep in mind with respect to Metacritic is that our featured product is the Metascore, which is the weighted average of the individual scores of the leading professional critics in each industry. We provide the user review system as a service to our users, but it is not as important to Metacritic's mission as the user scores are at Amazon and IMDb, for example. But we'll do our best to clean up this issue and to confront similar problems of this nature as they arise.
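Doyle's description of the Metascore as a weighted average of professional critics' scores can be sketched briefly. The interview confirms only the weighted-average mechanism; the scores and per-publication weights below are invented for illustration, as Metacritic does not publish its actual weights.

```python
# Minimal sketch of a weighted Metascore: each converted 0-100 critic score
# carries a weight reflecting the publication's standing. The weighting
# mechanism is confirmed in the interview; these numbers are hypothetical.
def metascore(reviews):
    """reviews: list of (score, weight) pairs; returns the rounded weighted mean."""
    total_weight = sum(w for _, w in reviews)
    return round(sum(s * w for s, w in reviews) / total_weight)

# Hypothetical example: one heavily weighted outlet and two standard ones.
reviews = [(88, 1.5), (90, 1.0), (70, 1.0)]
print(metascore(reviews))  # 83
```

A plain (unweighted) mean of those three scores would be about 83 as well here, but with more divergent weights the two figures separate, which is why the weighting matters.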

Would you consider dropping user scoring entirely following the recent issues, and what's changing on the user content side of things overall?

I certainly don't think so.

Why does Metacritic, an English language service, run English review summations that are linked to foreign language review sites?

Because The World Is Flat, as Thomas Friedman says, and video games are a global passion. There are many highly respected, reliable, and prolific critical publications in such countries as Spain, Portugal, Germany, Austria, Holland, the Czech Republic, Sweden, and Finland, among many others. I name these because Metacritic currently tracks publications from each. As long as I am able to assess the publication's quality and can provide English language excerpt translations to our users, I see no reason why I shouldn't share the insights from these countries with Metacritic's user base, which is certainly global.

What process do sites go through before being included on Metacritic?

Evaluating new publications is probably the most important part of my job. I ask publications who request inclusion to provide me with quite a bit of background information: for example, the amount of time its staff has been reviewing games, the nature of its leadership and staff, its scoring system, its review rate, its traffic/circulation, and so on. I'll also carefully review the publication as a whole and read a great many of its reviews to evaluate its writing quality, the foundation for its conclusions, and the soundness of its scoring. I'll discuss the publication with colleagues and conduct research to assess the publication's reputation for reliability and professionalism within the gaming community. I generally track a publication's progress for many months (and often years) before making any decisions on inclusion. Again, I'm not interested in tracking every website which is reviewing games. If a publication hasn't established its credibility, I won't track it.

Lastly, where else can we expect Metacritic to go in the future, and do you see more controversy on the horizon as the site's influence grows?

I can't discuss specifics regarding where Metacritic will go in the future because CBS is a publicly traded company, but no, I don't foresee more controversy as the site grows. The role of the individual critic has historically been controversial, so we, in highlighting the contributions of critics, expect to be a part of that overall conversation. But as long as our very passionate team is associated with Metacritic, we'll make sure that our processes are sound and principled, and we'll continue to be responsible caretakers of our scoring system.

Thanks for your time, Marc.

Thanks for the opportunity!