How to Understand the Election Polls You're Seeing Right Now

In this photo illustration, the names of candidates for the 2024 presidential election appear on a vote-by-mail ballot on Oct. 23, 2024, in Silver Spring, Md. Credit: Chip Somodevilla—Getty Images


As we finally hit the last weekend before Election Day, plenty of our friends are suddenly experts on the polls. Whether it’s a political circle, a YA book club, or even the line at the grocery store, the chatty ones have just one topic top of mind. Did you see that gender gap in The New York Times’ final survey last week? What about Thursday’s Gallup poll showing voter intensity among Democrats higher than at any point in the last 24 years? But didn’t NPR report that Trump is polling better than any Republican in the last two decades, including when he won in 2016? It can be a lot.

For those so inclined, going down the rabbit hole of polls can be a choose-your-own-adventure tale of self-assurance, self-torture, and deep confusion. And, to be frank, each path is entirely valid.

Sure, there are plenty of metrics to pore over to assess the health and potential of the two presidential campaigns: campaign finance data, ad strategy, where the candidates are planting themselves in the final days. Oh, and don’t even get me started on the imprecise modeling behind early-vote numbers.

But, really, polls are the easiest way to get a sense of the race. In August, we published a primer on how to read the polls like a pro. But in the final days of an election cycle like no other, many wonder if pollsters are getting the presidential race completely wrong… again. Here’s a rundown of why polling in 2024 is different from any other year, and why that’s creating more confusion about who’s getting it right.

Don’t all the polls show this is basically a coin toss?

Yes, but no.

With apologies to readers who are looking for an easy answer, one is not in the offing. As Republican pollster Kristen Soltis Anderson notes, the numbers are remarkably consistent across different surveys even as pollsters follow different sets of assumptions to get there. The Times poll showing a tied race at 48% and the CNN poll showing a tied race at 47% can both be accurate, but there are big differences in how they reach similar conclusions.

Put in legal terms, Jurors A and B can both find someone guilty of a crime yet reach that verdict by prioritizing different sets of facts. The verdict is no less sound for it; each juror’s rationale can be as true as it is divergent.

Part of this multi-track path toward the same end comes down to different polling shops prosecuting different theories of the electoral case. Is Harris changing the electorate in ways unseen before, with a dramatic—and still unrealized—success among women and college-educated voters? Is she putting together the old Obama coalition from 2008? Is Trump reviving the base a la 2016 or is he banking on a different coalition that has grown more tolerant of his disregard of norms? And should the voting patterns of 2020 be ignored, given that we were in the middle of a pandemic? All of those scenarios can be true, but to what degree? Different pollsters consider some of these questions more relevant than others in deciding who will turn out.

So, yes, polls are close. No one in either camp is sleeping comfortably these days, if they’re sleeping at all. The candidates are busy for a reason: this thing may be decided by fewer than 100,000 people in three (still-unknown) states. And no one knows who they are.

So these polls aren’t all using a common baseline?

Nope. Not even close, if they’re being honest. Every polling operation has to use its own best understanding of who will actually show up. Usually, as Election Day draws closer, pollsters shift from the wider universe of registered voters to likely voters, and that shift brings a blend of statistical modeling, historical trends, and more than a little gut.

Josh Clinton, co-director of Vanderbilt University’s solid polling operation, published an incredibly useful illustration of this challenge. Using a raw dataset from a national survey taken in early October, the wonk found Harris ahead by about 6 percentage points. That finding reflects who the pollsters were able to reach, which may not accurately reflect who ultimately turns out to vote. That’s where every pollster makes different decisions about how to adjust the raw data. When Clinton adjusts the data to fit the 2022 turnout universe, Harris is actually up 8.8 percentage points. Plug in 2020’s turnout and it’s a 9-point Harris lead. And if you use the 2016 figures, Harris still leads by 7.3 percentage points.

But this is where things get interesting. If you overlay modeling of how many voters identify as Democrat, Republican, or neither, you can get vastly different looks at the race. If you believe Pew Research Center’s data on the nation’s electorate, Harris’ lead shrinks to 3.9 percentage points if turnout resembles 2020. Pivot to Gallup’s snapshot of the electorate and that advantage drops to 0.9 percentage points. So you can see how modeling alone, using the same raw numbers, can swing this race by 8 points. And that’s just the most basic example of how a tweak here, on just one input question, and a bump there on dozens of other factors can throw off the whole system.
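To make that arithmetic concrete, here is a minimal sketch, in Python, of the kind of reweighting described above. Every number in it is invented purely for illustration; the groups, shares, and margins are hypothetical stand-ins, not the actual Pew, Gallup, or Vanderbilt figures.

```python
# A toy sketch of poll reweighting. All numbers are hypothetical,
# invented for illustration -- not the actual figures cited above.

# Raw poll: the share of respondents the pollster happened to reach in
# each party-ID group, and Harris' margin within that group (in points;
# negative means Trump leads the group).
RAW_SAMPLE = {
    "Democrat":    {"share": 0.36, "margin": +92},
    "Republican":  {"share": 0.30, "margin": -90},
    "Independent": {"share": 0.34, "margin": +4},
}

def topline_margin(mix):
    """Weight each group's margin by its assumed share of the electorate."""
    return sum(mix[g] * RAW_SAMPLE[g]["margin"] for g in RAW_SAMPLE)

# Unadjusted: weight by who actually answered the survey.
raw_mix = {g: v["share"] for g, v in RAW_SAMPLE.items()}

# Two defensible but different assumptions about who shows up to vote.
model_a = {"Democrat": 0.33, "Republican": 0.29, "Independent": 0.38}
model_b = {"Democrat": 0.30, "Republican": 0.33, "Independent": 0.37}

print(f"Raw sample: {topline_margin(raw_mix):+.1f}")  # Harris +7.5
print(f"Model A:    {topline_margin(model_a):+.1f}")  # Harris +5.8
print(f"Model B:    {topline_margin(model_b):+.1f}")  # Trump up 0.6
```

Same raw responses, two plausible assumptions about the electorate’s party mix, and the topline flips from one candidate to the other. Real pollsters are doing this across dozens of variables at once, not just party ID.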

This is happening at every single polling outfit in the political universe, and each set of data nerds is looking at the datasets through different lenses. It’s why the same set of voters can say the same thing to pollsters and see themselves reflected in an entirely different race. There’s a reason why we had to show our work in math class; the process matters as much as the answer.

So we shouldn’t compare, say, the CNN polls with the New York Times polls?

Absolutely not. The best practice is to compare like with like.

This year includes the added twist of Democrats swapping Joe Biden for Kamala Harris as their nominee in July. Basically, most comparisons between polls taken before and after Biden’s exit have limited utility. The same is true for cross-pollster comparisons, given that each outfit is making different assumptions about the electorate.

There’s also little value in comparing polls of registered voters and those of likely voters. They’re completely different universes.

Wait. Did no one fix political polling after 2016?

The 2016 polls became a punchline and a gut punch after their misalignment with reality became apparent early on Election Day. Hillary Clinton, after all, was thought to be coasting toward a clean defeat of Trump. With the benefit of hindsight, it was pretty clear that pollsters assumed too many college grads would show up, to name just one of the most obvious misses. Pollsters did their best to fix it four years later, but the polls again thought Biden would do better than he did.

Part of it is the Trump effect, which again has pollsters second-guessing themselves, in particular over which factors matter most in gaming out voter behavior. A research team at Tufts University did a survey of, well, the surveys, and found that some of the biggest shifts in back-end modeling since 2016 have come in giving much more heft to education, voting history, and where voters actually live. The team also documented a shift away from giving respondents’ income and marital status so much clout. Most pollsters have also adjusted the weight they give to age, race, and gender.

So, yes, pollsters have taken steps to iron out the wrinkles that were so apparent in 2016. But this is a public-opinion science that has to bake in some assumptions. And those are just that: educated guesses about the universe in play.

(Just to be contrarian: a credible counterargument is that the 2016 polls weren’t that far off; the national surveys simply didn’t match the state-by-state results that mattered most. Clinton allies would rather blame the pollsters for inflating her voters’ confidence to the point of complacency, but the reality is far more nuanced.)

So you’re saying we should cool it with the polls?

Absolutely. Polls are informative, not predictive. By the time you read them, they’re already out of date. Each one of them is making some informed assumptions about who will bother to cast a ballot. Almost every crosstab from a pollster’s latest release includes a judgment call, and no one gets all of them correct.

But let’s be honest: we won’t cool it. It’s just not what the armchair wonks know how to do. After two—if not four—full years of waiting for this final push, the catnip of these numbers is too much. It might be a waste of time, but ultimately it could actually have virtue in the most unlikely of ways.

The closeness of the polls may, in a bank-shot way, get more people to vote if they think their ballot may actually determine the outcome. So, in that sense, these tight polls might be good for the exercise of democracy and simultaneously garbage for the discussion of it. Yet it’s all we’re going to talk about for the next few days, and maybe beyond if the expectations they create are too far afield of the outcome. I’m as likely to be guilty of this as anyone. And, no, I probably won’t repent.


Write to Philip Elliott at philip.elliott@time.com.