The odds between Trump and Harris are all over the place – and virtually meaningless

Donald Trump has a 53 percent chance of winning the presidential election, the controversial Silver Bulletin election forecast declared midweek.

Its main competitor, 538, put the figure at 51 percent.

True obsessives scour the forecasts daily, looking for any sign that their favorite candidate is getting closer to the White House.

But even casual consumers of political news encounter the latest from Silver Bulletin and 538 on cable news, in the newspaper or on social media.

They have become a mainstay of public life—tidy, if often unnerving, estimates of who’s up and who’s down in the battle for America’s soul.

But the models have had some rough patches over the years, most notably in 2016, when they appeared to underestimate — and in some cases grossly underestimate — Trump’s chances of winning his first presidential race.

And more recently, as these predictions have come to occupy more and more space in our political imagination, some bigger questions have emerged.

Do they really represent the advances in the predictive arts that their creators claim?

And perhaps most importantly, should we worry about what they are doing to our political culture?

What it means for a prediction to be ‘correct’

Nate Silver has always been obsessed with numbers.

“One time when we were taking him to kindergarten, we dropped him off and he announced, ‘Today I’m a number machine,’ and started counting,” recalled his father, Michigan State University political science professor Brian Silver. “When we picked him up two and a half hours later, he was at ‘two thousand one hundred and twenty-two, two thousand one hundred and twenty-three’ . . .”

At age 11, he compiled a sophisticated analysis of whether baseball stadium size affects attendance (it doesn’t). And after college, he sold a model that predicts player performance to the baseball statistics publication Baseball Prospectus.

In 2007 his career took a turn.

Watching the television coverage of the Democratic presidential primary, he couldn’t believe how flimsy it all seemed.

So much bluster. So little data.

And when the pundits did bother to cite the polls, they often did so badly, seizing on the odd outlier survey here and there to proclaim new “momentum” for a candidate who might not be going anywhere.

So on his FiveThirtyEight website — named for the number of votes in the Electoral College — he built something different: a forecasting engine that aggregated polls, weighed state demographics, and drew on decades of voting patterns.

He rose to fame when he correctly predicted how 49 of the 50 states would vote in the 2008 presidential election, then outdid himself by going 50 for 50 in 2012.

The day after the election, the Christian Science Monitor went so far as to ask whether Silver had destroyed punditry.

In 2016, numerous news outlets published election predictions, and they generally rated Hillary Clinton as a heavy favorite.

But Silver had some cover when the shock result came in.

He had given Trump a 29 percent chance of victory, while other forecasters gave him a 1 or 2 percent shot. “Those are really different answers,” Silver later told The Washington Post. “One says, ‘Look, Trump is going to win the election about as often as a good baseball player gets a base hit.’ And the other says, ‘This is a once-in-a-blue-moon scenario.’”

It was a fair point.

But an uncomfortable question lingered: Can a model that gives a candidate a 29 percent chance of victory really claim to be somehow “right” when the heavy underdog wins by more than 70 electoral votes?

Would it still have been “right” at 20 percent? At 15?

And how reliable were the figures that formed the core of these predictions?

Read Silver’s long description of the 2020 presidential election model he built for FiveThirtyEight – a model he moved to Silver Bulletin largely intact last year – and you are struck not only by how carefully thought out the algorithm is, but also by the many judgment calls on which it rests.

Silver had to estimate how COVID-19 would affect the election; he decided to split Trump’s “home state adjustment” between the state where he built his business (New York) and the state of his official residence (Florida); and he created a comprehensive “uncertainty index” that measures, among other things, “the volume of major news, as measured by the number of full-width New York Times headlines in the last 500 days, with more recent days weighted more heavily.”

You could be forgiven for relying on the assumptions of a model that seems to perform quite well most of the time; Silver’s final prediction for the 2020 election gave Joe Biden an 89 percent chance of beating Trump.


But a recent paper by Justin Grimmer of Stanford University, Dean Knox of the University of Pennsylvania, and Sean Westwood of Dartmouth College argues that there is actually no way to assess the accuracy of an election forecasting model in the short or medium term.

If you want to measure the effectiveness of a financial or weather forecasting model, you can test it against millions of real-world observations.

But no comparable data set exists for presidential elections, which are relatively rare events; we are now nearing the end of just the 60th presidential contest ever.

The researchers found that it would take at least 24 presidential elections (over 96 years) and as many as 559 presidential elections (over 2,236 years) to say with certainty that an election model is better at predicting winners than a simple coin flip.

Distinguishing between one advanced model and another could take even longer.
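
For a sense of where numbers like that come from, consider a rough power calculation. This is a back-of-the-envelope Python sketch of my own, not the method the paper uses: it asks how many elections you would need before a forecaster who picks winners with a given accuracy could be told apart from a coin flip, using a one-sided test at the conventional 5 percent significance level with 80 percent power.

```python
from math import sqrt, ceil

# Hypothetical back-of-the-envelope calculation (my assumptions, not
# the Grimmer/Knox/Westwood method): how many presidential elections
# before a forecaster with true accuracy p can be statistically
# distinguished from a coin flip (p = 0.5)?
Z_ALPHA = 1.645  # one-sided test, 5 percent significance
Z_BETA = 0.842   # 80 percent power

def elections_needed(p: float) -> int:
    """Normal-approximation sample size for detecting accuracy p > 0.5."""
    effect = p - 0.5
    n = ((Z_ALPHA * 0.5 + Z_BETA * sqrt(p * (1 - p))) / effect) ** 2
    return ceil(n)

for p in (0.75, 0.65, 0.55):
    n = elections_needed(p)
    print(f"accuracy {p:.2f}: about {n} elections, roughly {4 * n} years")
```

A forecaster who calls 75 percent of races correctly separates from chance in a couple dozen elections, about a century’s worth; one who is only slightly better than a coin flip takes more than 600 elections, the same order of magnitude as the researchers’ upper bound.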

Election predictions cannot be wished away.

People will always try to see into the future.

And Andrew Gelman, a Columbia University statistician who advised on the development of The Economist magazine’s election forecasting model, says you have to do it as best you can.

It’s better to aggregate the polls, pull in the economic data that seems to move voters, and use what we know about past voting patterns than to offer ill-informed hot takes about the latest one-off poll from Michigan or Arizona.

“As the great baseball analyst Bill James once said, ‘The alternative to good statistics isn’t no statistics, it’s bad statistics,’” Gelman says.

But good statistics don’t always come across the right way.

Election models are based on probability: Candidate A has a 55 percent chance of winning. But in a laboratory experiment published in 2020, Dartmouth’s Westwood; Solomon Messing, then of the data firm Acronym; and Yphtach Lelkes of the University of Pennsylvania found that more than a third of people who see these forecasts confuse probability with vote share.

In other words, when they read that Candidate A has a 55 percent chance of winning, they think that means she has the support of 55 percent of voters.

Those are very different propositions. A candidate with a 55 percent chance of victory is in a virtual toss-up. But if polls show a candidate with the support of 55 percent of voters — and her opponent with only 45 percent — she has a very solid lead.
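
To see just how different, here is a toy conversion between the two quantities. It rests on a simplifying assumption of my own (a normal polling error of about three percentage points), not on anything in the study:

```python
from statistics import NormalDist

# Toy model (my assumption, not the study's): the polling average is a
# noisy read on the true vote share, with normally distributed error.
ERROR_SD = 0.03  # assumed polling error: ~3 percentage points
normal = NormalDist()

def win_probability(poll_share: float) -> float:
    """Chance the true vote share exceeds 50 percent."""
    return normal.cdf((poll_share - 0.5) / ERROR_SD)

def poll_share_for(win_prob: float) -> float:
    """Poll share that would imply a given win probability."""
    return 0.5 + normal.inv_cdf(win_prob) * ERROR_SD

print(f"55% in the polls  -> {win_probability(0.55):.0%} chance of winning")
print(f"55% chance to win -> {poll_share_for(0.55):.1%} in the polls")
```

Under that assumption, 55 percent support in the polls implies roughly a 95 percent chance of winning, while a 55 percent chance of winning corresponds to a polling lead of less than a point.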

That confusion, however, is not the study’s most worrying finding.

It also suggests that election forecasts could depress turnout. Potential voters who learn that a candidate has a high probability of winning or losing are more likely to skip voting – assuming that their ballot will make no difference to the outcome.

One critical finding: probabilities, however inscrutable to some voters, seem to carry special weight.

When potential voters in the experiment instead saw something resembling expected vote share — in other words, when they saw something like simple poll numbers instead of a probability score — they were just as likely to vote as before.

Of course, this is just one study, conducted in a laboratory. But it does suggest a way forward.

Rather than putting forward opaque and weighty-sounding probabilities, it might be better to focus on simple opinion polls, as outlets like The New York Times do.

Voters can more easily understand what a data journalist means when he says he combined all the surveys and found that, for example, 51 percent of voters support Trump and 49 percent support Kamala Harris.

Grimmer, the Stanford political scientist, says focusing on polls alone also creates opportunities for greater transparency, making it easier to reveal the decisions forecasters make about which data to prioritize.

“You could have a little widget on your site,” he suggests, that would allow readers “to weight the polls differently and get a different result.”
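
Something like that is easy to sketch. Here is a minimal, hypothetical version of the arithmetic such a widget might perform, with made-up poll numbers and weights chosen purely for illustration:

```python
# Minimal sketch of user-adjustable poll weighting (hypothetical
# pollsters and numbers, chosen only for illustration).
polls = [
    {"pollster": "Poll A", "trump": 0.51, "harris": 0.49},
    {"pollster": "Poll B", "trump": 0.47, "harris": 0.53},
    {"pollster": "Poll C", "trump": 0.50, "harris": 0.50},
]

def weighted_average(polls, weights):
    """Weighted polling average; weights might reflect quality or recency."""
    total = sum(weights)
    trump = sum(p["trump"] * w for p, w in zip(polls, weights)) / total
    harris = sum(p["harris"] * w for p, w in zip(polls, weights)) / total
    return trump, harris

# Equal weights vs. trusting Poll B twice as much:
for weights in ([1, 1, 1], [1, 2, 1]):
    t, h = weighted_average(polls, weights)
    print(f"weights {weights}: Trump {t:.1%}, Harris {h:.1%}")
```

Shift weight toward a single pollster and the aggregate shifts with it; that is exactly the kind of judgment call forecasters already make, just out of public view.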

Putting everything into perspective

Whether a news outlet decides to aggregate polls or make more complicated predictions, says Jay Rosen, a professor of journalism at New York University, it’s important to keep all predictions in perspective.

Outlets should make forecasts available, but they “shouldn’t be central” to campaign coverage, he says.

Reporters should instead focus on the issues that voters care about.

The focus should be, as Rosen likes to say, “not on the odds, but on the stakes.”

Of course, most major media organizations can’t resist diving into the odds – into the who’s up and who’s down.

But that doesn’t mean you have to.

The next time you come across a forecast, linger over it for a moment if you must. But recognize what it can tell you and what it cannot.

Then move on to something meatier.


David Scharfenberg can be reached at [email protected]. Follow him @dscharfGlobe.