Twice before, in 2009 and 2011, I sought to predict the Academy Award winners in six major categories based on a mix of statistical factors. My track record? Nine correct picks in 12 tries, for a 75 percent success rate. Not bad, but also not good enough to suggest that there is any magic formula for this.
So this year, I have sought to simplify the method, making the link to the FiveThirtyEight election forecasts more explicit. This approach won’t be foolproof either, but it should make the philosophy behind the method more apparent. The Oscars, in which the voting franchise is limited to the 6,000 members of the Academy of Motion Picture Arts and Sciences, are not exactly a democratic process. But they offer plenty of parallels to political campaigns.
In each case, there are different constituencies, such as the 15 branches of the Academy (actors, producers, directors and so forth) that vote for the awards. There is plenty of lobbying from the studios, which invest millions in the hopes that an Oscar win will extend the life of their films at the box office. And there are precursors for how the elections will turn out: polls in the case of presidential races, and for the Oscars, the litany of other film awards that precede them.
So our method will now look solely at the other awards that were given out in the run-up to the Oscars: the closest equivalent to pre-election polls. These have always been the best predictors of Oscar success. In fact, I have grown wary that methods that seek to account for a more complex array of factors are picking up on a lot of spurious correlations and identifying more noise than signal. If a film is the cinematic equivalent of Tim Pawlenty — something that looks like a contender in the abstract, but which isn’t picking up much support from actual voters — we should be skeptical that it would suddenly turn things around.
Just as our election forecasts assign more weight to certain polls, we do not treat all awards equally: some awards have a strong track record of picking the Oscar winners in their categories, whereas others almost never get the answer right (here’s looking at you, Los Angeles Film Critics Association).
These patterns aren’t random: the main reason some awards perform better is that they are voted on by people who will also vote for the Oscars. For instance, many members of the Screen Actors Guild will vote both for the SAG Awards and for the Oscars. In contrast to these “insider” awards are those like the Golden Globes, which are voted on by “outsiders” like journalists or critics; these tend to be less reliable.
Let me show you how this works in the case of the Best Picture nominees. There are a total of 16 awards in my database, not counting the Oscars, that are given out for Best Picture or that otherwise represent the highest merit that a voting group can bestow on a film. (For instance, the Producers Guild Awards are technically given out to the producers of a film rather than the film itself, but they nevertheless serve as useful Best Picture precursors.) In each case, I have recorded how often the award recipient has corresponded with the Oscar winner over the last 25 years (or going back as far as possible if the award hasn’t been around that long).
The best performance has come from the Directors Guild of America. Their award for Outstanding Direction in a Feature Film has corresponded with the Academy Award for Best Picture a full 80 percent of the time. (Keep in mind that Best Picture and Best Director winners rarely differ from one another — although this year, as you will see, is very likely to be an exception.) The Producers Guild awards are the next most accurate; their award for best production in a feature film has a 70 percent success rate in calling the Academy’s Best Picture winner. Directors and producers are the movers and shakers in Hollywood, and any evidence about their opinions ought to count for a lot – as it does in our system.
By contrast, the Golden Globe for best dramatic motion picture has only matched with the Oscar winner about half the time. And some of the awards given out by critics do much worse than this: the Los Angeles Film Critics Association’s Best Film has matched the Oscar only 12 percent of the time, for example. Our formula, therefore, leans very heavily on the “insider” awards. (The gory details: I weight each award based on the square of its historical success rate, and then double the score for awards whose voting memberships overlap significantly with the Academy.)
Ideally, we would want to look not only at which films win the awards, but also at how close the voting was (just as it is extremely helpful to look at the margin separating the candidates in a political poll). Unfortunately, none of the awards publish this information, so instead I give partial credit (one-fifth of a point) to each film that was nominated for a given award.
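To make the arithmetic concrete, here is a minimal sketch, in Python, of the scoring scheme just described. The award names, match rates and results below are illustrative placeholders rather than my actual database, and the helper functions are hypothetical; the rules, though, are the ones stated above: square each award’s historical success rate, double that weight for “insider” awards, and credit a win in full and a nomination at one-fifth.

```python
# Illustrative sketch of the award-weighting scheme; the data here are
# placeholders, not the real 16-award, 25-year database.

# Historical rate at which each precursor award has matched the Oscar winner,
# and whether its voters overlap heavily with the Academy ("insider").
AWARDS = {
    "Directors Guild":    {"match_rate": 0.80, "insider": True},
    "Producers Guild":    {"match_rate": 0.70, "insider": True},
    "Golden Globe Drama": {"match_rate": 0.50, "insider": False},
    "LA Film Critics":    {"match_rate": 0.12, "insider": False},
}

def award_weight(info):
    """Weight an award by the square of its success rate, doubled for insiders."""
    weight = info["match_rate"] ** 2
    return 2 * weight if info["insider"] else weight

def score_film(film, results):
    """Sum weighted credit: a full share for a win, one-fifth for a nomination."""
    total = 0.0
    for award, info in AWARDS.items():
        outcome = results.get(award, {}).get(film)  # "won", "nominated" or None
        if outcome == "won":
            total += award_weight(info)
        elif outcome == "nominated":
            total += 0.2 * award_weight(info)
    return total

# Hypothetical precursor results for two Best Picture nominees.
results = {
    "Directors Guild":    {"Argo": "won", "Lincoln": "nominated"},
    "Producers Guild":    {"Argo": "won", "Lincoln": "nominated"},
    "Golden Globe Drama": {"Argo": "won", "Lincoln": "nominated"},
    "LA Film Critics":    {"Lincoln": "nominated"},
}

for film in ("Argo", "Lincoln"):
    print(film, round(score_film(film, results), 2))
```

Under those made-up inputs, the insider awards dominate the totals, which is the point: a film that sweeps the guild awards builds a lead that a stack of critics’ prizes cannot overcome.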
The short version: our forecasts for the Academy Awards are based on which candidates have won other awards in their category. We give more weight to awards that have frequently corresponded with the Oscar winners in the past, and which are voted on by people who will also vote for the Oscars. We don’t consider any statistical factors beyond that, and we doubt that doing so would provide all that much insight.
Sometimes, of course, it shouldn’t require a formula to know who is going to win. Such is the case with the Best Picture nominees this year. One film has dominated the category, and it is “Argo.”
“Argo” has won the top awards given out by Hollywood directors, producers, actors, writers and editors, all of whom will also vote for the Oscars. It also won the Bafta (British Academy of Film and Television Arts) award for Best Picture, whose membership has significant overlap with the Academy.
“Zero Dark Thirty” may have won slightly more critical acclaim, but the critics do not vote for the Oscars; the insiders do. And there has been absolute consensus for “Argo” among the insiders. It would be an enormous upset if it were to lose. (“Lincoln,” once considered the front-runner, has been nominated for almost every best picture award but won none of them. Counting on a comeback would be a bit like expecting Rudolph W. Giuliani to have resurrected his campaign in Florida in 2008 after finishing in sixth place everywhere else.)
If “Argo” is a shoo-in for Best Picture, then you might expect Ben Affleck to be the clear favorite for Best Director as well. And he almost certainly would be — if only he had been nominated.
Instead, in what might have been karmic payback for “Gigli,” Mr. Affleck was snubbed by the Academy. So despite winning the Directors Guild award, the Golden Globe and the Bafta for best director, Mr. Affleck will not win an Oscar in this category.
The next most-common winner of best director awards, after Mr. Affleck, has been Kathryn Bigelow, for “Zero Dark Thirty.” But Ms. Bigelow was snubbed by the Academy as well.
This creates a tremendous problem for any method that is based on looking at other award winners. In fact, it gives me unhappy memories of our infamous Taraji P. Henson pick in 2009. (In that case, there was a lot of disagreement about which actresses were nominated in the leading and supporting categories, making it hard to track results from one award to the next.)
One thin reed is that David O. Russell won the Satellite Award for “Silver Linings Playbook.” This is, in fact, the only one of the nine awards we track whose winner was even nominated for the Oscar. However, the Satellite Award has historically matched the Academy Award for Best Director only 38 percent of the time, so it gets little weight in our system.
Instead, the method defaults to looking at partial credit based on who was nominated for the other awards most frequently. Among the five directors who were actually nominated for the Oscars, Steven Spielberg (for “Lincoln”) and Ang Lee (“Life of Pi”) were nominated for other directorial awards far more often than the others, and Mr. Spielberg slightly more regularly than Mr. Lee. So the method gives the award to Mr. Spielberg on points, but it’s going to be blind luck if we get this one right: you can’t claim to have a data-driven prediction when you don’t have any data.
One place where “Lincoln” will almost certainly pick up hardware is for Best Actor, where Daniel Day-Lewis should win for his portrayal of the 16th president. Bradley Cooper (“Silver Linings Playbook”) did win the Satellite Award and the National Board of Review’s award for best actor, but neither has a strong track record, whereas Mr. Day-Lewis has swept the awards that predict Oscar success well.
There is considerably more uncertainty in the Best Actress category, and it is here that our practice of weighting the awards based on their past reliability may be the most helpful.
Jennifer Lawrence, Mr. Cooper’s co-star in “Silver Linings Playbook,” won the Screen Actors Guild award for Best Actress. That has been the single most reliable award in the Best Actress category in the past, corresponding to the Oscar winner two-thirds of the time. This is very possibly because actors and actresses make up the largest of the Academy’s 15 branches of voters, creating especially strong overlap between the two voting populations.
However, the SAG Award still goes wrong one-third of the time (it did so as recently as last year, when Viola Davis won it, while Meryl Streep won the Oscar). Ms. Lawrence would not necessarily be favored if there were a consensus against her in the other awards.
There isn’t really any such consensus, however. Jessica Chastain won the Golden Globe for Best Actress in a drama, but she was not matched up directly against Ms. Lawrence, who was nominated for (and won) the comedic category instead. Ms. Chastain also won several awards given out by critics, but these have less predictive power. (It also seems reasonably clear that Academy members are not enamored of “Zero Dark Thirty.”)
The 85-year-old Emmanuelle Riva (“Amour”), meanwhile, won the Bafta in an upset and is now attracting a lot of buzz as a potential Oscar surprise. But one rule-of-thumb in elections analysis is that “momentum” is often given too much credence by pundits. I suppose I can’t say for certain that the same is true in Oscars coverage — and perhaps it is more relevant in the case of a film like “Amour,” which may not have been seen by all that many Academy members until recently. But the SAG Awards have a better track record than the Baftas across all acting categories, despite usually predating them on the calendar. The safe money therefore remains on Ms. Lawrence, with Ms. Riva and Ms. Chastain being viable alternatives.
The Best Supporting Actor race is almost certainly the most competitive category: all five nominees have won Oscars before, and there is no consensus choice. In fact, the competition was tough enough this year that some well-known actors (like Leonardo DiCaprio for “Django Unchained”) who won critics’ awards were not even nominated by the Academy.
As is the case for the other acting categories, however, our method tends to default to the SAG winner when there is a lack of consensus otherwise: that was Tommy Lee Jones for “Lincoln.”
Christoph Waltz (“Django Unchained”) won both the Golden Globe and the Bafta (and might be my choice if I were going based on which performance I liked the most personally, instead of trying to guess at what the Academy will do). But one red flag is that Mr. Waltz was not even nominated for several other awards, including the SAG, suggesting that he may lack support among some Academy constituencies.
Philip Seymour Hoffman (“The Master”) could be a compromise candidate: he was nominated more widely than Mr. Waltz, and won more awards than Mr. Jones. But his wins came in awards that were voted on by critics, historically the least reliable.
Finally, there is the sentimental choice: Robert De Niro of “Silver Linings Playbook,” who last won an Oscar in 1981 (for “Raging Bull”). However, Mr. De Niro has not won any major awards for “Silver Linings Playbook,” and so he ranks last among the five nominees by our statistical method.
Simply put, it would be unusual for an actor to win an award after having been shut out previously. If he wins, it might constitute evidence that members of the Academy are treating the Oscars differently than they do other awards, using them as a proxy for lifetime achievement – or that lobbying efforts on behalf of Mr. De Niro had been successful. But such attempts at log-rolling could also backfire: it’s plausible, for example, that votes received by Mr. De Niro could come at the expense of Mr. Waltz, since both films were produced by the same studio, making it easier for Mr. Jones to win by plurality.
There is considerably less reason for last-minute campaigning in the Best Supporting Actress category: Anne Hathaway is about as safe a bet to win for “Les Misérables” as Mitt Romney was to win Utah. If Sally Field or Amy Adams wins instead, it will probably be time for me to retire from the Oscar-forecasting business.