The power of the press, eh? Tom Chick’s Homefront review, which he’s writing for Gameshark, will be posted on Friday but the general consensus so far is that the game isn’t quite as good as we’d hoped. THQ sunk a lot of money into this game, and top level THQ reps spoke of it as a real competitor to Call of Duty and Battlefield. So far the critical reception has been less than glowing.
That said, Homefront pre-orders were through the roof, according to THQ, and it’s not like this is the first high profile game to receive middling review scores. Currently Homefront sits at the completely arbitrary “71” on Metacritic but according to a story from Reuters, this is the reason for the 9% stock drop.
Then there’s this:
“The first-person shooter game “Homefront” received a score of 75 on Metacritic, a website that tracks reviews of games. The market leader in war-themed games is Activision Blizzard’s (ATVI.O) “Call of Duty: Black Ops,” which has a score of 88 on the website. “This score is a bit of a disaster for THQ and the share price today is reflecting that,” said Janco Partners analyst Mike Hickey. “The market is a quality driven market (and) you need at least a score of 80 and above on Metacritic to do well.”
Even Reuters and “Janco Partners analyst Mike Hickey”, who somehow is under the impression that this is a “quality driven market” even though I can show Mike about 1,000 cases where that’s not true, cite Metacritic scores as if they’re actually tangible, and every time I see that my stomach starts to hurt and I need to get up from my PC, take the dogs outside, get some air, and hug my family for reassurance that the entire world in which I work hasn’t lost its collective mind. If, and this is a huge assumption, the Metacritic score is the reason THQ’s stock dropped, then we really need to start looking at how this industry functions from a criticism standpoint, because if it’s THAT big of a deal and we’re using arbitrary review numbers from an aggregate website to determine stock prices and, thus, jobs and livelihoods, then we’re deeper down the rabbit hole than I ever imagined.
Seriously, why do people put so much weight into an aggregate scoring system that has no normalization process between the various outlets it’s pulling its data from? I get that it’s a decent place to go and say, “hey, I want to know what reviews are out about this game.” However, to focus so much on the Metacritic score is just silly. It’s so far removed from the reviewing process and so general that it loses any actual context. Hmm…maybe that’s why the business end loves it.
Stomach churning indeed!
Though not surprising. If you consider the temper-tantrum-oriented nature of the stock market and provide an “easy to use” indicator of product quality like Metacritic… those two getting together sounds like an aggregate loss.
Video game news outlets should figure out how to game the system. Obviously Metacritic has become too powerful. Maybe it could be rendered irrelevant if outlets gave two scores, their metacritic score and then a score for people who actually read the review. This could be as simple as the last line of the review being a candid one sentence summation.
Now, that may sound dishonest, but I think everyone wins. Players who care would click through to the review itself, as they do already, and be rewarded with the truth. And who gives a shit about the players who buy a game based on a number? They aren’t giving your site page views, and they probably aren’t discriminating enough to tell the difference between a COD and a Medal of Honor anyway.
Unless there’s some kind of grassroots effort to subvert Metacritic and publishers’ reliance on it, I think its influence will only continue to grow.
The stock market is a crazy thing; any rumor or news can make a stock drop or rise. If Homefront sells well regardless of critics, the stock will rise, but for now all they have is the opinion of critics. There isn’t, to my knowledge, a Metacritic for tech reviews, but if the iPad 2 was slammed by reviewers as a poor device, Apple stock would drop, and if the device sold well regardless, the stock would rise.
THQ made a very big deal of Homefront. The early poor reviews may imply that the game won’t do well in the long run, but if sales prove otherwise they will be fine.
You may not like Metacritic, but you cannot say that a 71 score isn’t indicative of the general feeling of reviews. It’s not a scientific score, it’s not an accurate assessment of a game’s quality; it’s an indication of what most critics think about the game. The critics and the gamers are usually not that far apart from each other. Of course there are always exceptions, but the correlation is certainly there for these types of games.
Now may be a good time to buy THQ stock. If most of the pre-orders sell and a few more get thrown on top, that stock could go right back up. Could be a bit of easy money.
Also, Mister Hickey should stick to markets he understands. If he thinks Metacritic is the be-all and end-all, he’s quite frankly undereducated for his job.
Also, how does a 13-point difference divide “market leader” from “bit of a disaster”? What the hell happened to using some goddamn logic?
Sites like metacritic are just an easy way for analysts working at institutional investors that are exposed to the video games industry to gauge the wind so to speak. The news article is dumbing down the idea, but just remember that these guys are making quick decisions on how to measure which way THQ’s earnings are going to go.
You have to remember that investing in markets is as much a game of expectations as anything else, and theoretically, stock prices reflect expectations and future income in addition to current earnings. THQ has hyped Homefront as their next big franchise to fight with CoD and Medal of Honor for the last year or so, and apparently there were investors who believed enough in the management and their developers, which ultimately drove the stock price up.
Now that Homefront is out, and it started to receive average reviews, a lot of those investors, who believed that Homefront was going to be a AAA, COD level type of game in terms of popularity, are re-evaluating their decisions which is what is causing the drop. If you don’t act first in the market, you’re going to lose out, so a lot of these institutions are racing to evaluate their exposure to THQ.
THQ will now need to prove that Homefront, even with its average reviews (not bad, just average), is still a commercially viable series to its investors. If the sales figures come out at the end of the month, and Homefront still sold like gangbusters, you can bet on seeing a double digit rise once again.
pyjamarama, here’s the issue, as I see it anyway.
At the time of that article, the MC score was “75”.
The question, then, is what does that score really mean?
For sites like 1Up and GameShark, or any other site that uses letter grades — like Game Revolution — a 75 is actually a “B” grade.
What’s wrong with a B? Well, according to that analysis, a B may as well be a stake through the heart. Game Rev gave it an “83”, which on this scale is a B+. What if GR meant for it to score an 88? Which to me says B+ more than 83 does. Wouldn’t that alter the score?
But the analysts (and PR) don’t SEE 75 as a “B”; they see a 75 as what we all take a 75 to mean — a school grade. A 75, to almost everyone, immediately screams “C” or “really not that good”, when that may not be at ALL what the reviewer intended to say, but MC says that for them.
What if you use a 5 star system?
A 3/5 = a 60. A 3-star review doesn’t MEAN “this game is CRAP”, but that’s how people read it. It’s completely off the rails. A 60? People see that as a failing grade. It’s just natural.
When you have multiple websites using multiple grading scales, one website throws all those scores into a blender and pops out some arbitrary number, and then industry analysts use that number to determine a game’s overall appeal — that’s Krazy.
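To make the blender concrete, here is a minimal sketch of how an aggregator might fold mixed scales into one 0–100 number. The letter mapping is an assumption for illustration only (using B = 75 and B+ = 83, the conversions mentioned above); Metacritic’s real conversion tables are its own, and these function names are invented.

```python
# Hypothetical sketch: blending a letter grade, a star rating, and a
# raw score into one average. The letter table below is assumed, not
# Metacritic's actual mapping.

def letter_to_100(grade: str) -> int:
    table = {"A": 92, "B+": 83, "B": 75, "B-": 67, "C": 58, "D": 42, "F": 25}
    return table[grade]

def stars_to_100(stars: float, out_of: int = 5) -> float:
    # Straight linear conversion: 3/5 becomes 60, even though many
    # outlets mean "decent" by 3 stars, not "failing".
    return 100 * stars / out_of

# One "B" review, one 3-star review, one raw 88 -- three scales blended:
scores = [letter_to_100("B"), stars_to_100(3), 88]
average = sum(scores) / len(scores)
print(round(average, 1))  # 74.3
```

The point of the sketch is that the output number depends entirely on conversion choices the aggregator makes unilaterally: nudge the “B” mapping from 75 to 83 and the blended score jumps, with no reviewer changing their opinion.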
I like Metacritic, I really do, I think it’s a good tool — but it’s just that — a tool and when people use it as the be all end all for critical reception, it’s just plain faulty data. Bad science.
Metacritic will never, ever, EVER be properly indicative for one simple reason.
Some reviewers treat 50% as “Good”. Some treat 80% as “good”.
If five of the 50 goods rate a game 60, and five of the 80 goods rate a different game 70, that means the below good game has a higher score.
Also, if four people rate it a 50 percent, and another rates it 100 percent because they give all above-average games 100 (yes, some “professional” reviewers do), that’ll screw the rating entirely. It’s a 50 game. Some guy just happened to go nuts over it. Hell, some reviewers may even have scores skewed by free swag they receive.
Unless it is enforced that every reviewer publish a review for EVERY game on there, their numbers are skewed and worthless. A few 50 goods in one review base and not in another review base will skew results. It’ll never be indicative of the review base because the whole review base doesn’t review every game.
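A toy illustration of the baseline problem described above, with all numbers invented: two outlets with different bars for “good” can rank two games backwards on a shared 0–100 axis.

```python
# Invented numbers: Outlet A treats 50 as "good" and rates Game X a 60
# (above its bar). Outlet B treats 80 as "good" and rates Game Y a 70
# (below its bar).
game_x = {"score": 60, "good_threshold": 50}
game_y = {"score": 70, "good_threshold": 80}

def margin(game: dict) -> int:
    # Distance of the score above the outlet's own "good" line.
    return game["score"] - game["good_threshold"]

print(margin(game_x))  # 10  -> X cleared its reviewer's bar
print(margin(game_y))  # -10 -> Y fell short of its reviewer's bar

# Yet on the raw aggregate axis, the "below good" game scores higher:
print(game_y["score"] > game_x["score"])  # True
```

An aggregate that averages raw numbers keeps only the score and throws away each outlet’s threshold, which is exactly the information a reader needs to interpret it.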
Quite simply, metacritic has no quality control. The aggregated rating they give is worthless. You have to go through and read the reviews.
I would agree that Metacritic could do a better job translating non-standard scales, but because most outlets use the 10-point or 20-point scale, in the end those outliers won’t have that great an impact on the average score when there are 50+ reviews accounted for, though they can have an impact in the beginning when there are fewer than 10 reviews. Now, you can complain to Metacritic, but my guess is that they would rather you changed to a 10-point scale than change the algorithm.
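The diminishing effect of an outlier as the review count grows checks out with quick back-of-the-envelope arithmetic (invented numbers):

```python
# One rogue 100 among otherwise-50 reviews moves a 10-review average
# five times as much as a 50-review average.

def average(scores):
    return sum(scores) / len(scores)

small_pool = [50] * 9 + [100]    # 10 reviews, one outlier
large_pool = [50] * 49 + [100]   # 50 reviews, same outlier

print(average(small_pool))  # 55.0 -- the outlier shifts the score 5 points
print(average(large_pool))  # 51.0 -- the shift shrinks to 1 point
```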
My guess is that PR will give you crap even when they know that it really isn’t your fault and there isn’t a real impact on the score, just because they think they may influence your opinion for the next game. They may also operate on the assumption that Metacritic is not a prediction of sales but an influencer of sales. That, in my view, is wrong.
As for the analysts, they deal with numbers. I’m fairly sure that for these types of games there is a correlation between Metacritic and sales numbers, with outliers of course; Metacritic is the prediction, and sales numbers will prove them wrong or right. If they feel that Metacritic is not being accurate, they can do their own internal analysis and correct it without saying a thing; the fact that no one speaks against it says that Metacritic is accurate enough in their view.
Although I agree with what you say and the need of business to help quantify market expectations, it hardly justifies Metacritic as a legitimate source of data, especially considering how much personal influence Metacritic has over translating non-numerical scores (4 out of 5 stars, 6 out of 9 burritos, A+) into a numerical value that they alone decide and consider fair. In fact, we heard earlier that this has caused friction between reviewers and game devs when Metacritic translated a score into a numerical value the reviewer did not give and the dev did not like.
Furthermore, what you are seeing is a very general snapshot of a much wider and more complicated market. I personally feel that this is very dangerous considering that major business decisions will be made off such wide generalizations. The people making these calls owe it to themselves, their co-workers, and their shareholders to take a much closer look before they make a call.
Add to that the fact that a 75 is a “Green” score at MC (barely). That means Metacritic rates the game as a pretty safe buy or, in their words, “Generally Favorable Reviews.”
The one thing I have not gotten from the Homefront reviews is: “Why is CoD:Black Ops an 87 and Homefront a 71?”
Every criticism of Homefront I’ve seen so far could easily be leveled at Black Ops. Except that Black Ops went into a special area of awful with its zombie mode and that White House level.
We as a media consuming society have declined from “I’ll read the back of the box” (late ’70s), to “I’ll read a review about it first” (’80s), to “What score did the reviewer give it?” (’90s), to “I can’t be bothered to think for myself, what did Metacritic give it?” (’00s+).
Don’t get me wrong, though: I use Metacritic like everyone else, but I pay no attention to the scores. Metacritic to me is just a stoplight. If a product has a “green” score, I’ll investigate the purchase further. I have also found it a great resource for getting reviews of legacy games (before they changed the site; I hope this functionality returns).
People love Call of Duty. If it wasn’t called “Call of Duty” and was just simply “a new IP called Black Ops”, it’d probably be around 70-ish as well.
People just like what they’re used to, and don’t deal well with change. A new IP can be seen as change, so people actually review them without the rose-tinted glasses of past games.
I’m pretty sure the article/analyst was oversimplifying their thought process. Metacritic is an easy to understand datapoint that they can quote someone on, but it doesn’t take metacritic to spend the 20 minutes to just read a bunch of reviews and get a sense of “this game is kind of okay.”
I really don’t think we should read this article as “Metacritic is the divining rod for THQ’s stock price.” It was a business article written for a non-gamer audience, trying to explain the idea that the critical consensus was not so great for Homefront.
I just posted my impressions of it. It is _definitely_ worse than Black Ops on a number of levels.
I think it was you guys on one of the radio shows that brain-stormed about a RottenTomatoes type rating system for games instead. Quite honestly, either that or an ArsTechnica type buy/rent/skip (thus- green/yellow/red) scale might be the best alternative to pursue. I’m not entirely sure that solves the underlying problems at all, but one step at a time? Quite simply the kind of emphasis we’ve been putting on metacritic for such things has gotten out of hand, but unless there’s something easier and better to move towards, all of the complaints are pointless…for better or worse.
I use metacritic primarily as a way of getting a good sample of reviews from various places to then try and read. Using it as a stock market guide or a method of measuring success/failure milestones of a studio is missing a lot of steps by any measure. I suppose it shows their desire for quality is in the right place, but the measurement of that quality needs a bit of control and common sense. Until there’s something else to point to or some kind of shift in the industry hive-mind though, I doubt it gets better
This is getting to be such a tired cop-out. I have never played a Call of Duty game and even I can see the fallacy in what you are suggesting.
New entries in popular franchises often have naturally inflated review scores because the earlier games were good enough to attract a following, and the developers were smart and savvy enough to build upon what people enjoyed about the game instead of starting completely from scratch. The reviewer’s duty changes from simply “is this a good game?” to “is this a worthy successor? Does it build upon the last game in a meaningful way while maintaining the core of what people have come to expect from the franchise?” This is why you might see that coveted 80%+ score frequently assigned to sequels that don’t seem to do anything interesting that their predecessors didn’t already do. Should it be penalized for giving people what they want? If they didn’t want it anymore they’d stop buying it, so in Call of Duty’s case I think it’s safe to say they still do.
Now, looking at it from another angle, should sequels be penalized more heavily for failing to innovate? That’s debatable, and perhaps from there you’d have a more solid foundation for an argument (all this really accomplishes, though, is to further illustrate why review scores are such a Bad Thing). Don’t try to suggest that a new ip’s middling scores are the fault of its status as a new ip. There are plenty of examples of new franchises with higher scores than their contemporaries, and just as many of old franchises laid low.
What’s interesting about this last bit – to me, anyway – is that most of those old franchises often lose points for not delivering the expectations a fan might have based on his past experiences, and aren’t always properly rewarded for bringing intriguing new gameplay ideas to the table. Final Fantasy is a great example of this: it’s one of gaming’s most venerable franchises, and the last few entries have been criticized by fans and critics alike for experimenting with new gameplay. Do those gamers have a right to expect some familiarity? I’m not sure what the answer to that question is, but it’s the perfect illustration of why a new ip might be forgiven some inadequacies in a genre’s fundamentals if it brings something new and cool to the table, while an established ip might not score points for innovation if it fails to deliver on the staples fans have come to expect from the franchise.
I’m not sure these are unreasonable expectations, but it does further highlight the industry’s review score problem, particularly when aggregates like Metacritic are so frequently used to measure a game’s success.
“New entries in popular franchises often have naturally inflated review scores because the earlier games were good enough to attract a following, and the developers were smart and savvy enough to build upon what people enjoyed about the game instead of starting completely from scratch.”
How is this not the title giving a high score? People go in knowing EXACTLY what to expect, and there’s less of a chance of people letting their mental vision of the game get out of control and become unachievable.
With a new IP, people can think what they want, and I think it’s safe to say most of us allow our imaginations to go nuts, and games rarely hit our expectations, if at all. Marketing is partly to blame for this, but it’s human nature to overestimate what we’re excited about, unless you’re a bit pessimistic or don’t get excited about anything.
Having your mental vision shattered puts a lot of people in a bad mood. The game caused it. Thus, I’d say it’s only fair that people would score it that little bit lower.
No matter which way you look at it, the title plays a huge part in scores. How often do people refer to “the next CoD” rather than its actual subtitle?
I’m not saying plenty of sequels don’t score less than their predecessor. However, I am saying that people look favourably on normality rather than “new” or “change”.
Case in point, Dragon Age 2. Mental image of second game. Second game didn’t meet mental image due to change. Image shattered, lots of people scoring excessively low. DA2 is still a good game. People just had their mental image shot to hell. New IPs have a better chance of doing this because we have no base point to work around but our own fantasies and some marketing spin doctoring.
I disagree with your assessment. This is not the first anti-Metacritic article posted on nohighscores, nor the first time nohighscores has voiced disapproval of how the videogame industry views and uses Metacritic to validate a game’s worth. I believe the article is meant to prod at something this site feels is a much deeper problem within the game industry regarding scoring and the label of what is “successful”.
You’re free to assume what you want, but by that regard I’m free to do likewise.
It’s not the title giving a high score because the game has to actually meet those basic expectations to earn the score. It’s just a completely different metric than the one new franchises are judged on. No, it is not at all fair, but it isn’t a great way to defend low scores for a game like Homefront, either.
Wild expectations are another serious problem with the industry, fueled in no small part by the gaming press. I don’t see how it relates to this issue, particularly since expectations tend to get out of control for established ips at least as often as new games (if a new franchise gets anywhere near the same hype/buzz, it’s almost always because it’s being launched by a proven developer, which is what happened with Homefront), and criticism borne from this attitude tends to be unbearably nitpicky. If anything, the argument seems to partially refute your main point about titles impacting scores, since an established ip has to live up to its name in every conceivable way to avoid this sort of backlash. Conversely, a new franchise could score points for executing an innovative gimmick really really well, even if it fails to deliver on the basics genre fans demand.
Anyway, I don’t mean to press the sins of the grander interwebs onto one individual, but it’s a common defensive maneuver to point to an established franchise in the same genre and say “why did this game get a _._ and new game x didn’t? No fair.” It’s a dead end; they’re measured in completely different ways…which is one of the many reasons why review scores just plain suck. I don’t think there is a fair way to do it. I would hate to see letter grades dry up because of something as stupid as the metacritic aggregate, and we seem to be headed that direction.
In fairness, if we all avoided arguing points with people because we’re afraid of singling them out, discussion would dry up, which is one of the points of comments sections on blog posts.
And I understand your case, but I think a name holds more weight towards getting positive reviews. I just feel people seem to beat down new IPs more often than sequels. Besides, more than likely the answer is a middle ground neither of us is ever going to hit. My opinion doesn’t make yours invalid and vice versa, but if people didn’t discuss our viewpoints, I think conversation would dry up very fast.
Well, for what it’s worth, I *didn’t* avoid it, so I will choose to interpret that as a compliment. And neither did you, so it’s one I will return.
One quick fix? Give new ips a bit more breathing room on the metacritic average. Personally I find it easier to look past the score on a new franchise and…you know, actually rely on the meat of the review. I don’t think that’s uncommon, and it’s shortsighted for publishers and stockholders to hold them to the same standard. Call of Duty has been refining its gameplay for…how many years now? How many sequels? It’s incredibly foolhardy to go into a new franchise expecting the same level of polish. I think MOST gamers understand that. There will always be morons – I might be prejudiced here with my word choice – who think a game like Shadow of the Colossus has nothing to offer because of its poor frame rate, but that doesn’t change just how many gamers fell head-over-heels in love with it despite its flaws. Those same gamers might have been less willing to forgive a sequel for failing to fix the problem, myself included.
I think we agree that the system is unfair; we just disagree on whether or not it really matters, and possibly on the point of evidence seeming to imply that title is king. I would love to read a solid study of the issue, and I wouldn’t be surprised to discover you’re right. I still don’t think it’s a good way to defend a new game from poor reviews, since there’s currently no real way to prove or disprove the assertion.