Today’s analysis of our model looks remarkably similar to last week’s, and to the week before that. Our presidential model gives Kamala Harris a 55% chance of winning the election, almost identical to the 57% chance it gave her over the past two weeks.
The root cause is simple: ever since Biden dropped out, the polls have told a fairly consistent story, and you have to squint to see any particularly noteworthy changes. This week is no different, and with only 23 days until the election, it does not seem likely that something huge will upend the overall trajectory of the race.

We do have one noteworthy change: for the first time in a while, all seven core swing states are rated as a Toss Up. This may seem like a bit of a cop-out, but it simply means our model has neither Harris nor Trump at over 60% odds in any of these states. It still sees Harris as the slightest of favorites in Michigan, Wisconsin, Pennsylvania, and Nevada, while Trump is very narrowly favored in Arizona, Georgia, and North Carolina.
As we have repeatedly emphasized, however, there is not a huge difference between a 55% chance of winning and a 55% chance of losing. As an example, consider the box below. Can you tell whether it’s 55% blue or 55% red?

It’s quite hard to tell, and we’d wager that a lot of our readers would get it wrong (the correct answer is red). The most you can generally do here is say that it looks roughly equal. And that’s the state of the race in basically every single swing state — we estimate Harris is slightly ahead in the Midwest and in Nevada, while Trump is slightly ahead in the Sun Belt, but the advantage is far too small to say anyone is conclusively favored.
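To put a rough number on how hard this is, here is a small simulation (our own illustrative sketch, not part of the model): treat a glance at the box as registering a random sample of its cells and then guessing the majority color. Even a fairly careful look gets the answer wrong a meaningful fraction of the time.

```python
import random

def guess_accuracy(p_red=0.55, cells_noticed=50, trials=20_000, seed=0):
    """Fraction of trials in which a viewer who registers `cells_noticed`
    randomly chosen cells of a box that is truly 55% red correctly
    guesses that red is the majority color (ties count as wrong)."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(trials):
        reds = sum(rng.random() < p_red for _ in range(cells_noticed))
        correct += 2 * reds > cells_noticed
    return correct / trials

print(guess_accuracy(cells_noticed=20))   # a quick glance: right only ~60% of the time
print(guess_accuracy(cells_noticed=200))  # a careful look: better, but still not a sure thing
```

Accuracy climbs toward certainty only as the effective sample grows, which is exactly why a 55/45 box reads as "roughly equal" to the eye.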
That brings us to a somewhat existential question: what is the point of our forecast, if all it can do is tell you that the race is a Toss Up? The answer lies in the forecast’s core strength as a quantitative, not qualitative, model.
While it may sound cliché, the model does not spend all day on Twitter. It cannot react to a “vibe shift”, nor does it take into account a news article discussing how the polls seen by campaign insiders are causing panic. This may sound like a limitation, since the model is not accounting for information that the political universe is aware of. And indeed, compared with an omniscient model that could see inside the heads of every pollster, public or private, it would be. But that is not how information in politics works.
Throughout any campaign, but especially toward the end, there is always significant buzz about “insider information.” The narrative usually goes something like this: the polls say one thing, but if you ask a campaign “insider,” they’ll tell you the public polls are wrong. Internally, their numbers show something completely different (depending on the narrative they wish to craft, this means they are either running ahead or behind the public polls).
A variant of this is the public release of an internal poll. These polls usually skew in favor of the candidate they are conducted for, often to prove that the candidate is in a competitive position to win.
Our model accounts for internal polls in two ways. First, it downweights them heavily, because the source is not necessarily presenting an unbiased picture of the information. Second, it adjusts for their historical bias: internal and partisan polls are usually skewed by about four points in favor of the party releasing them, and our model corrects for this. (In the presidential race, this makes little difference, because we have enough nonpartisan polls, but it makes a substantial difference in downballot races, where polling is much sparser.)
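As a rough sketch of how those two corrections might combine in an aggregate (illustrative only: the four-point skew is from above, but the 0.3 downweight and the `adjust_poll` helper are our own inventions, not the model’s actual code):

```python
def adjust_poll(margin, is_internal, sponsor_party=None,
                internal_weight=0.3, partisan_skew=4.0):
    """Return (adjusted_margin, weight) for one poll.

    `margin` is in points, positive = Democrat ahead. Internal polls are
    downweighted and shifted back by the ~4-point historical skew toward
    the sponsoring party. Weight and skew values here are illustrative."""
    if not is_internal:
        return margin, 1.0
    if sponsor_party == "D":
        margin -= partisan_skew
    elif sponsor_party == "R":
        margin += partisan_skew
    return margin, internal_weight

# Toy aggregate: two public polls plus one Democratic internal.
polls = [(+3.0, False, None), (+6.0, True, "D"), (-1.0, False, None)]
total = weight_sum = 0.0
for m, internal, party in polls:
    adj, w = adjust_poll(m, internal, party)
    total += adj * w
    weight_sum += w
print(round(total / weight_sum, 2))  # prints 1.13
```

The eye-catching D+6 internal ends up contributing as a modest D+2 with less than a third of a public poll’s weight, so it barely moves the average.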
It is worth noting that internal polls having a skew does not mean that campaign pollsters are conducting terrible polling. Quite the contrary, in fact, as campaigns have little incentive to operate on poor data. Internal polls are real polls, and they’re used for very serious and granular analysis that often goes far beyond what public polls are capable of, in both demographic analysis and message testing.
Internal polls are, however, selectively released, even if there is nothing wrong with the poll itself. Imagine, for example, that a campaign’s internal polling in Georgia consisted of all the public polls of the state released in the past few weeks. Depending on the narrative they wished to craft, they could choose a Trump +5 poll (Quinnipiac got this result recently) or a Harris +3 one (Fox News). However, in the aggregate, our polling average has remained at around Trump +1 for the past several weeks, as has our model’s forecast.
Consequently, it is not the data itself, but its selective release into the public that causes issues, especially since the data is often meant to craft a certain narrative. There are similar narrative biases in how the media treats “insider information,” even when it does not directly involve a released internal poll.
There is, of course, a bias toward noteworthiness. You are not likely to hear a story about how people “in the know” actually agree wholeheartedly with the conventional wisdom. This is true of both published media and internal chatter. A poll in a competitive race showing one candidate up 10 points may make the rounds online and in political circles, but this often happens because it is a surprising outlier that is not representative overall. Due to this, an “insider scoop” may even be inversely correlated with reality.
Campaigns have a vested interest in setting various narratives (as it happens, in this specific election, both the Harris and Trump campaigns have telegraphed that their preferred narrative is Harris as an underdog). Even when they do not do this intentionally (no campaign is entirely leak-free), the information ecosystem in politics is complicated, and a message can easily become garbled by the time it reaches you.
Finally, while campaigns do conduct high-quality polling, there is no evidence that this polling is leagues better than what is available publicly. The private polling universe does, however, win on quantity. Only a small fraction of polls conducted ever make it to the public eye; the universe of data is much larger than what is seen publicly, but it is difficult to peer into it with an unbiased view.
If private pollsters had some special methodology that defied “polling error,” there would be quite a bit of incentive to go public and become the world’s most accurate pollster. Instead, we are reminded of the fact that in 2012, Mitt Romney’s polling made him so confident of victory that he prepared no concession speech.
That brings us back to the utility of our model. What our model can do that humans cannot is remain unbiased in the face of the gargantuan, hulking monster that is the political information ecosystem. It adjusts for internal polls consistently, when it can. It makes no attempt at sorting through online “vibe shifts” caused by narratives that are often driven by flawed or biased data (if they are data-driven at all).
Often, the “vibes” will shift without any corresponding movement in the data. This is where the value of a model comes in, because it does not care about vibes. It looks exclusively at the data, instead of what pundits on Twitter may be saying. It has been remarkably consistent in its output, weathering multiple weeks of “vibe shifts.” If the race were to meaningfully shift, it would be because the underlying data changed. Until that happens, it will continue to say what it has always said: this race is a Toss Up.
That doesn’t mean the election is guaranteed to be close in terms of the actual outcome — recall that a Toss Up fundamentally means that the evidence doesn’t conclusively point to one candidate being clearly ahead. Polling error is correlated, so it’s actually more likely than you might imagine that one candidate is underestimated by surveys and sweeps all swing states come election day. In fact, in our model, there’s a 26% chance that Kamala Harris wins all seven swing states, and a 25% chance that Donald Trump does so.
But we have no way of knowing ahead of time which way the polling error will break, because it’s famously volatile and unpredictable. Response environments change across cycles, and pollsters make corrections to avoid repeating recent misses, so it’s not possible to confidently predict which side will be underestimated by polls. That’s why the best thing to do is to assume that the polls will be unbiased and keep wide uncertainty bands, which is what our model does.
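The effect of correlated error can be sketched with a toy Monte Carlo (illustrative numbers only: the margins, the 3-point shared error, and the 2.5-point state-level error are our assumptions for this sketch, not the model’s actual covariance structure):

```python
import random

def sweep_probabilities(margins, shared_sd=3.0, state_sd=2.5,
                        trials=100_000, seed=0):
    """Monte Carlo with correlated polling error. Each state's outcome is
    its polled margin (positive = Harris ahead) plus one shared national
    error draw plus an independent state-level draw, all in points.
    Returns (P(Harris sweeps), P(Trump sweeps))."""
    rng = random.Random(seed)
    harris = trump = 0
    for _ in range(trials):
        shared = rng.gauss(0.0, shared_sd)  # one draw hits every state alike
        results = [m + shared + rng.gauss(0.0, state_sd) for m in margins]
        if all(r > 0 for r in results):
            harris += 1
        elif all(r < 0 for r in results):
            trump += 1
    return harris / trials, trump / trials

# Rough swing-state margins for seven states (made up for illustration).
margins = [+1.0, +1.0, +0.5, +0.5, -1.0, -1.0, -1.0]
print(sweep_probabilities(margins))
```

Rerunning with `shared_sd=0` (and the state error inflated to keep total variance comparable) makes sweeps dramatically rarer; the shared component is what keeps "one candidate wins everything" a live possibility even when every state is a coin flip.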
Senate

Our model’s Senate forecast is quite literally unchanged from last week’s, with Republicans at 73% favorites to win the chamber. The closest race in the country is still in Ohio, where Democratic Senator Sherrod Brown maintains a narrow lead in the polling averages. Whether Republican Bernie Moreno can erase this lead in the final weeks could make or break Democrats’ underdog battle for the Senate.
However, even though the GOP has made little headway in other swing-state Senate races, Democrats could pull out a win in Ohio and still lose the chamber. Republicans are guaranteed a flip in West Virginia, and polling continues to worsen for Democratic Sen. Jon Tester in Montana, who now trails in our polling averages by a larger margin than challengers Colin Allred in Texas and Debbie Mucarsel-Powell in Florida. To keep the chamber, Democrats will need to win one of those three races, but a victory in any of them would be a substantial upset.
House

Our model’s House forecast is arguably the only one to see a noteworthy change in the past few weeks. A month ago, our forecast gave Democrats a 61% chance to flip the House, but this is now down to 52%. While not a huge difference, the race for the chamber has now moved from being slightly Democratic-leaning into Toss Up territory. However, it is worth noting that House polling is by far the sparsest; much of the available data comes in the form of aforementioned internal polls and “generic ballot” polling.
The race for the House may be overshadowed by high-profile Senate races, as well as by the presidential race, always the political king of the hill. It may, however, be the most competitive in the country. Although the House may simply fall in the direction the presidential race goes, it is still quite possible that either Harris or Trump could be denied a trifecta thanks to a downballot House race that is assuredly not getting much attention at the moment.
I am an analyst specializing in elections and demography, as well as a student studying political science, sociology, and data science at Vanderbilt University. I use election data to make maps and graphics. In my spare time, you can usually find me somewhere on the Chesapeake Bay. You can find me at @maxtmcc on Twitter.
I’m a computer scientist who has an interest in machine learning, politics, and electoral data. I’m a cofounder and partner at Split Ticket and make many kinds of election models. I graduated from UC Berkeley and work as a software & AI engineer. You can contact me at lakshya@splitticket.org

