Oscar Prediction Shake-Up? Nate Silver Given New Game-Changing Role By ABC

UPDATED WITH MORE DETAILS: The Academy Awards are important in and around Hollywood for primarily three reasons: the nominations which bring audiences into theaters as a movie marketing tool, the lucrative ‘For Your Consideration’ ads they generate, and the global telecast announcing winners so everybody can bask in their reflected glory. Now ABC is trying to corner the market on all three with one move. Not only does the network broadcast the Oscars but its news division is guaranteeing data guru Nate Silver a role. How much of a Hollywood game-changer will this become? Not much of one, judging from how little attention his movie awards prognostication has garnered in the past. Twice before, in 2009 and 2011, he sought to predict the Academy Award winners in 6 major categories based on a “mix of statistical factors”. His track record was 9 correct picks in 12 tries, for a 75% success rate. “Not bad, but also not good enough to suggest that there is any magic formula for this,” he wrote. For the 6 marquee categories he handicapped in 2013, he was correct only on the sure things and missed the 2 that were more complex to predict. Meh.

I’ve been pondering this news scooped by Politico’s Mike Allen about all the inducements ESPN/ABC News gave the 35-year-old to leave The New York Times, including extensive air time, a digital empire, and inclusion in the Oscars. A lot of showbiz websites and blogs large and small, smart and smarmy, clued-in and clueless, depend on their Oscar prognostication to drive traffic and foot the bills. But unless Silver allows for the myriad variables that go into Academy Award noms and wins – insider stuff that Deadline knows from covering movie awards season in-depth – he won’t become more accurate.

For instance: Who’s popular, deserving, and appropriately humble enough to get nominated? Which film’s director is considered a douchebag whom nobody wants to win? What studio did a lousy job campaigning for the Academy Awards? How badly is Harvey Weinstein badmouthing the competition? I’ve always said that most Oscar voters are not just geriatric and cranky but also jealous and vengeful. Whether Silver’s statistical model can take into account those idiosyncrasies and also cover more Oscar categories than just 6 remains to be seen. But I’ll bet on Deadline’s own awards columnist Pete Hammond to beat Silver’s prognostications in 2014.

Obviously, the annual Academy Awards process isn’t as big a deal as U.S. national election campaigns. But it’s interesting to note that Silver’s FiveThirtyEight blog was driving 20% of all traffic to the NYT as the last election reached its climax. That’s because in 2012 he correctly predicted the presidential winner in all 50 states, in 2008 the winner in 49 out of 50 states, plus all 35 U.S. Senate races that year as well. What ESPN/ABC offered was to return FiveThirtyEight.com to Silver as his flagship, put him on air at ESPN and ABC, and develop verticals on a variety of new topics. And now he’ll work for the TV home of the Oscars. Odds are that Silver’s blog will now become one of the go-to places for Oscar dollars. But not for accuracy.

Can Silver truly become a trusted player in this showbiz space? Maybe. But he’ll have to do a lot better. Of course, if he’s wrong his first time out after being hyped way more than in the past, he’ll be laughed out of the biz. First, he has to stop relying on all the other film awards each year. They simply don’t matter. It might help if the Academy Of Motion Picture Arts & Sciences hands Silver its list of voters. Considering that AMPAS and ABC are joined at the hip because their broadcast pact goes at least through 2020, that’s doable. But whether the membership will resent having its privacy violated, or balk at participating in any polling, is another challenge. Certainly the Academy over the years has discouraged voters from cooperating with any prediction schemes.

So what methodology will Silver use? As best as I can understand it (and, please remember that I’m mathematically challenged), it’s a so-called ‘educated and calculated estimation’ stemming from his reliance on statistics and his study of performance, economics, and metrics. This guy applied the Elo rating system – which calculates relative skill levels from game results – to Major League Baseball, and he developed his PECOTA system for projecting player performance and careers and sold it. His FiveThirtyEight is a self-created political polling aggregation website (which took its name from the number of electors in the U.S. Electoral College) that weights and combines polls in a statistical model. He needs to better adapt that to the Oscars instead of just relying on other awards shows.
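For the mathematically challenged (myself included), here is a minimal sketch, in Python, of what an Elo-style rating actually does: each game nudges the two sides’ ratings up or down depending on how surprising the result was. This is illustrative only, not Silver’s code; the 1500 baseline and the K-factor of 20 are conventional textbook choices I’m assuming, not his actual parameters.

```python
# A minimal Elo-style update, for illustration only. Two sides start with
# equal ratings; each result moves points from the loser to the winner in
# proportion to how surprising the outcome was. The 1500 baseline and
# K-factor of 20 are conventional choices, not Silver's parameters.

def expected_score(rating_a: float, rating_b: float) -> float:
    """Probability that side A beats side B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

def update(rating_a: float, rating_b: float, a_won: bool, k: float = 20.0):
    """Return the new (rating_a, rating_b) after one game."""
    exp_a = expected_score(rating_a, rating_b)
    score_a = 1.0 if a_won else 0.0
    delta = k * (score_a - exp_a)
    return rating_a + delta, rating_b - delta

# Example: a 1450-rated underdog upsets a 1550-rated favorite and gains
# more points than it would for beating an equal opponent.
print(update(1450.0, 1550.0, a_won=True))
```

The point is simply that a rating boils past results down to a single number you can use to forecast the next matchup – the kind of trick that does not obviously carry over to 6,000 Academy voters.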

Here’s what Silver wrote about his Oscar predictions in 2013:

This year, I have sought to simplify the method, making the link to the FiveThirtyEight election forecasts more explicit. This approach won’t be foolproof either, but it should make the philosophy behind the method more apparent. The Oscars, in which the voting franchise is limited to the 6,000 members of the Academy of Motion Picture Arts and Sciences, are not exactly a democratic process. But they provide for plenty of parallels to political campaigns.

In each case, there are different constituencies, like the 15 branches of the Academy (like actors, producers and directors) that vote for the awards. There is plenty of lobbying from the studios, which invest millions in the hopes that an Oscar win will extend the life of their films at the box office. And there are precursors for how the elections will turn out: polls in the case of presidential races, and for the Oscars, the litany of other film awards that precede them.

So our method will now look solely at the other awards that were given out in the run-up to the Oscars: the closest equivalent to pre-election polls. These have always been the best predictors of Oscar success. In fact, I have grown wary that methods that seek to account for a more complex array of factors are picking up on a lot of spurious correlations and identifying more noise than signal. If a film is the cinematic equivalent of Tim Pawlenty — something that looks like a contender in the abstract, but which isn’t picking up much support from actual voters — we should be skeptical that it would suddenly turn things around.

Just as our election forecasts assign more weight to certain polls, we do not treat all awards equally. Instead, some awards have a strong track record of picking the Oscar winners in their categories, whereas others almost never get the answer right (here’s looking at you, Los Angeles Film Critics Association).

These patterns aren’t random: instead, the main reason that some awards perform better is because some of them are voted on by people who will also vote for the Oscars. For instance, many members of the Screen Actors Guild will vote both for the SAG Awards and for the Oscars. In contrast to these “insider” awards are those like the Golden Globes, which are voted upon by “outsiders” like journalists or critics; these tend to be less reliable.

Let me show you how this works in the case of the Best Picture nominees. There are a total of 16 awards in my database, not counting the Oscars, that are given out for Best Picture or that otherwise represent the highest merit that a voting group can bestow on a film. (For instance, the Producers Guild Awards are technically given out to the producers of a film rather than the film itself, but they nevertheless serve as useful Best Picture precursors.) In each case, I have recorded how often the award recipient has corresponded with the Oscar winner over the last 25 years (or going back as far as possible if the award hasn’t been around that long).

[Chart 1: how often each precursor award’s winner has matched the Best Picture Oscar over the past 25 years]

The best performance has come from the Directors Guild of America. Their award for Outstanding Direction in a Feature Film has corresponded with the Academy Award for Best Picture a full 80 percent of the time. (Keep in mind that Best Picture and Best Director winners rarely differ from one another — although this year, as you will see, is very likely to be an exception.) The Producers Guild awards are the next most accurate; their award for best production in a feature film has a 70% success rate in calling the Academy’s Best Picture winner. Directors and producers are the movers and shakers in Hollywood, and any evidence about their opinions ought to count for a lot – as it does in our system.

By contrast, the Golden Globe for best dramatic motion picture has only matched with the Oscar winner about half the time. And some of the awards given out by critics do much worse than this: the Los Angeles Film Critics Association’s Best Film has matched the Oscar only 12 percent of the time, for example. Our formula, therefore, leans very heavily on the “insider” awards. (The gory details: I weight each award based on the square of its historical success rate, and then double the score for awards whose voting memberships overlap significantly with the Academy.)

Ideally, we would want to look not only at which films win the awards, but also at how close the voting was (just as it is extremely helpful to look at the margin separating the candidates in a political poll). Unfortunately, none of the awards publish this information, so I instead give partial credit (one-fifth of a point) to each film that was nominated for a given award.

The short version: our forecasts for the Academy Awards are based on which candidates have won other awards in their category. We give more weight to awards that have frequently corresponded with the Oscar winners in the past, and which are voted on by people who will also vote for the Oscars. We don’t consider any statistical factors beyond that, and we doubt that doing so would provide all that much insight.
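To make the quoted recipe concrete, here is a rough Python sketch of the scoring scheme Silver describes above: each precursor award gets a weight equal to the square of its historical hit rate, insider awards count double, winners collect the full weight and mere nominees one-fifth of it, and the film with the biggest total is the forecast favorite. The hit rates, insider flags, and film names below are illustrative placeholders, not Silver’s actual database.

```python
# Rough sketch of the weighting scheme Silver describes above. The hit
# rates, "insider" flags, and film names are illustrative placeholders.

# award -> (historical hit rate, voted on by Academy-overlapping insiders?)
AWARDS = {
    "DGA":          (0.80, True),
    "PGA":          (0.70, True),
    "Golden Globe": (0.50, False),
    "LA Critics":   (0.12, False),
}

def award_weight(hit_rate: float, insider: bool) -> float:
    """Weight = square of the historical success rate, doubled for insider awards."""
    return hit_rate ** 2 * (2.0 if insider else 1.0)

def score_films(results: dict) -> dict:
    """results maps award -> (winner, other nominees). Winners earn the
    award's full weight; mere nominees earn one-fifth of it."""
    scores: dict = {}
    for award, (winner, nominees) in results.items():
        rate, insider = AWARDS[award]
        weight = award_weight(rate, insider)
        scores[winner] = scores.get(winner, 0.0) + weight
        for film in nominees:
            scores[film] = scores.get(film, 0.0) + 0.2 * weight
    return scores

# Hypothetical precursor results for two made-up contenders.
results = {
    "DGA":          ("Film A", ["Film B"]),
    "PGA":          ("Film A", ["Film B"]),
    "Golden Globe": ("Film B", ["Film A"]),
    "LA Critics":   ("Film B", ["Film A"]),
}
for film, total in sorted(score_films(results).items(), key=lambda kv: -kv[1]):
    print(f"{film}: {total:.3f}")
```

On those made-up inputs the DGA and PGA wins swamp everything else, which is exactly the “insider awards count most” behavior Silver says he is after.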

(First version posted at 3 AM)

  1. Very interesting. Could be a watershed moment (his expansion into other areas). I’d be very interested in his weather predictions, which at this moment are woefully inaccurate in the U.S. due to the defunding of science projects by past Republican presidents and Congresses. I don’t think Academy members are going to be too keen to give up their info. That would be an interesting fight. In years when I actually care, I’m usually 98 percent correct for the major awards anyway. It really isn’t that hard to predict, so I don’t know what Nate would bring to the party. He can crib my notes if he wants to.

  2. Fascinating post from Nikki here — it represents the exact same debate that went on in baseball when sabermetrics gurus first came on the scene: statistical analysis vs. “scouting” (in this case, insider Hollywood info).

    1. “Exact same” is a little strong. In baseball, players are statistically analyzed on numerous aspects of their game to get a good impression of how they’ll perform on average over hundreds of games…whereas with the Oscars we’re guessing the opinions, not skills, of some old people. And we’re given far less data to predict how their opinions will swing.

      I read Silver’s site every day during the election and he’s brilliant at that stuff, but it just doesn’t translate to the Oscars.

  3. Like him or not, he is a medium genius, or a genius in his medium, but this game is tougher than predicting political outcomes because the fix always has a twist, a payback, or a hex factor.

  4. And the L.A. Film Critics have a 12% accuracy rate which proves how out of touch and wrong they all are. A bunch of elite snobs.

  5. He came to fame for correctly predicting the Election. However, if he incorrectly predicts the Oscars (as most often do), then what? He’d have to predict the ones that are not the common choice and that the others get wrong.

  6. Well put, Nikki.

    Silver’s Oscar picks in the past have been mediocre at best.

    Until he understands that the Academy Awards are nothing like Baseball or Politics, he won’t get any better. Hollywood is all about itself. You have to understand not just how the industry works, but how it THINKS.

    Knowing what won Best Costume Design in 1945 really means squat here. A lot of folks outside the Oscar prediction circle were still on the LINCOLN bandwagon long after ARGO had moved into the lead. And Silver has no apparent experience in predicting the technical categories at all. (Of course, he could just crib off of Pete Hammond’s public posts!)

  7. Silver is massively overrated. His performance in the 2010 midterms was mediocre at best, and with the exception of a few deluded GOP operatives, most people were expecting a clean re-election for Barry (Romney had too many “must win” states that were at best within the margin of error for him). Silver’s “analysis” of polls is naive and error-prone in the long term. It’s the same thinking that failed to predict the 2008 market crash.

    Obviously using him for Oscar predictions is completely inane.

    1. Silver is not an infallible oracle, as some seem to think. However, he revolutionized first baseball projections and then election projections, and soon other fields as well. He’s awesome.

  8. Nikki…If a monkey threw poop at a list of nominees in each category, I’d bet on the monkey to beat Nate. At the least it would be close. Hopefully Nate isn’t a sore loser.

  9. Silver’s analytics are flawed from the get-go: they assume that most of the voting members of the Academy ACTUALLY vote and ACTUALLY see movies. Missing from his “scientific” formula are the newbie film school grad turned PA who’s handed his boss’ ballot to fill out; the coffee klatch of studio assistants to executives who get together and fill out their bosses’ ballots en masse over lunch (’cause their bosses haven’t been to a movie theater in years); the nurse/caregiver in the Beverly Hills mansion who gets handed the ballot ’cause at least he/she watched all the screeners sent to the house; and the countless sons and daughters in high school and college who tell their voting parent(s) who’s hot and what’s the cool movie.

    Add to the mix whether Scott & Harvey are friends or enemies that year, and which studio Harvey is suing to generate publicity for his Oscar push – that, my friend, is the real analytics of Hollywood.
