Pollstradamus - US Election Forecasting

2024 Report Card

Grading Myself vs. The Competition

March 29, 2025

Grading My Foray

The 2024 Election was my first foray into using a data-driven model to predict an electoral outcome. Like other political junkies, I had read through polls in past elections, trying to divine a map from the tea leaves, but I had never used numbers like I did this past year.

So, how did I do?

I ran three forecasts: (1) President, (2) Senate, and (3) House. If you were to rank them by the effort I put in, that would also be the order. And… if you were to rank how well I did, that, too, would be the order: President, then Senate, then House.

Presidential Grade

A couple of weeks back, I released a graphic comparing the presidential prediction performance of the major outlets against my own. I beat all of them. I outperformed them when looking at all contests (50 states plus DC) as well as when looking at just the seven swing states.

Forecast        | All States Error | Swing States Error | Correct Candidate
Pollstradamus   | 2.29%            | 0.62%              | 100%
FiveThirtyEight | 2.64%            | 2.27%              | 92.86%
JHK             | 2.72%            | 2.09%              | 92.86%
The Economist   | 3.12%            | 2.54%              | 96.43%
Silver Bulletin | 3.28%            | 2.16%              | 96.43%

Each of these five forecasts put out the margins they expected one of the two candidates to win by in each state. For example, in New Mexico, the five forecasts expected Harris to win by between 6.00% to 7.40%. One forecast—I won’t name names—guessed Harris’ win in NM down to the hundredth.

Looking at the error for “all states,” Pollstradamus outperformed the field. Mind you, the total range in error between the five forecasts spans just 0.99%, but for a forecast running on a $0 budget, I think besting them all should be a point of pride.

Looking at the error for swing states, a much bigger separation emerges. The other forecasts were off in the seven swing states by over 2% on average. Meanwhile, this website missed the exact margins of victory by an average of just 0.62%. Wisconsin and Pennsylvania were my best swing states; I missed each by less than a quarter of a percent (0.23% and 0.24%, respectively).
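An "error" column like the ones in the table above is presumably a simple mean absolute error over predicted-versus-actual margins. Here is a minimal sketch; the margins are illustrative placeholders built from the Wisconsin and Pennsylvania misses quoted above, not the real forecast data.

```python
# Sketch of an "error" column: the mean absolute difference between
# forecast and actual margins across contests.

def mean_abs_error(forecast: dict, actual: dict) -> float:
    """Average |forecast margin - actual margin| across contests."""
    diffs = [abs(forecast[s] - actual[s]) for s in forecast]
    return sum(diffs) / len(diffs)

# Hypothetical signed margins (positive = Dem win) reproducing the quoted
# misses: off by 0.23% in Wisconsin and 0.24% in Pennsylvania.
actual = {"WI": -0.9, "PA": -1.7}
forecast = {"WI": -0.9 + 0.23, "PA": -1.7 + 0.24}

print(round(mean_abs_error(forecast, actual), 3))  # 0.235
```

Averaging the absolute misses over all 51 (or seven) contests in the same way would yield the table's columns.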

When it came down to picking the correct candidate, Pollstradamus prevailed once again, with 100% forecast accuracy. This, to me, is the least impressive measure of skill. Most Americans and political prognosticators understood that this race came down to seven state contests. Understanding that, the worst plausible performance was 87.5% (49 out of 56).1 There were some who had delusions about a blue Iowa or a red Virginia2, but those were never really in the cards.
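That 87.5% floor follows directly from the contest count in footnote 1. A quick check of the arithmetic:

```python
# The 56 presidential "contests": 50 states, DC, and the five extra
# district-level electoral votes awarded by Maine (2) and Nebraska (3).
contests = 50 + 1 + 2 + 3
swing = 7  # the seven states everyone agreed were in play

# Calling every safe contest correctly and every swing state wrong:
floor_accuracy = (contests - swing) / contests
print(contests, round(floor_accuracy * 100, 1))  # 56 87.5
```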

Overall, I’m happy with my performance here, and if I may be so bold as to give my own grade, I would give myself an “A.”

Senate Grade

It was August, and I had polished my presidential model to my satisfaction. But I was not satisfied overall. There were, after all, many other high-profile races up for a vote in November, and if the other models were putting up forecasts for those, why couldn’t I?

So, I set out to make my Senate model.

In modern American voting, ticket splitting happens less and less often. Who we vote for at the top of the ticket largely informs who we vote for down the rest of the ballot. That was the theory underpinning the Senate model.
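In its simplest form, that theory reduces to a one-variable prediction: the co-partisan presidential margin predicts the Senate margin. A minimal sketch follows; the slope and intercept are the idealized no-ticket-splitting values, not the model's actual fitted numbers.

```python
# Straight-ticket assumption: in a world with no ticket splitting, the
# Senate margin tracks the presidential margin one-for-one (slope ~= 1,
# intercept ~= 0). The defaults encode that idealized case.

def predicted_senate_margin(pres_margin: float,
                            slope: float = 1.0,
                            intercept: float = 0.0) -> float:
    return slope * pres_margin + intercept

# NC 2020-style input: Trump won by 1.4%, so the model expects the GOP
# Senate candidate to land near +1.4% (Tillis actually won by 1.8%).
print(predicted_senate_margin(1.4))  # 1.4
```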

And, for the first two elections in the Trump Era, that proved to be ironclad. In the 66 Senate races that occurred in 2016 and 2020, only one Senate candidate won in a state where her party’s presidential nominee lost. Only one. That would be Susan Collins (R) from Maine, hailing from a state that voted for President Biden in 2020.

This X post of mine shows this graphically.

Largely, this stayed true in 2024, but still, that tally rose from one to five. Michigan, Wisconsin, Nevada, and Arizona narrowly split their tickets last cycle. All four of the GOP’s candidates went down by less than 2.5%, and all but Kari Lake lost by 1.7% or less.

Naturally, that screwed me a bit. My Senate model forecasted a bullish 56-seat majority for the GOP in the Senate. I correctly forecasted a win for the GOP in Pennsylvania; I had given then-candidate Dave McCormick a 56.42% chance of winning over incumbent Senator Bob Casey. The race was appropriately rated a “toss up.”

In Wisconsin, the forecast incorrectly predicted a victorious Eric Hovde, who lost by just 0.9% to incumbent Senator Tammy Baldwin. I had given Mr. Hovde a 52.09% chance of winning to the senator’s 47.91%. Since this was a very close tossup, I am not cross with myself for coming up short. The world’s current best pollster, Atlas Intel, had the race hovering right around one percent; it had come to be seen as a near fifty-fifty proposition.

There is a very real phenomenon where voters tell polling firms little fibs. Many voters indicate in polls that they will split their ticket but never do. This is demonstrated by Senate candidates, time and again, outperforming their polling numbers with their final performance looking much like their party’s presidential candidate.

That’s why my forecast was so bullish on Kari Lake and Sam Brown. Respectively, they were given a 79% and 67% chance of winning—LOL.

But let’s revisit my logic here. Senate candidates had long been outperforming their polls during the Trump Era. Not by small numbers, either. By leaps and bounds. Former Senator Martha McSally (R-AZ) polled horribly in the lead-up to her eventual loss to Mark Kelly. The RealClearPolitics average had her down by 5.7% going into election day in 2020. If you look at the polls in the lead-up to the election (especially in mid-October), McSally looked poised to lose by nearly double digits. Yet she lost by just 2.4%.

Less dramatically, Thom Tillis looked set to lose his seat to challenger Cal Cunningham. Only two polls in the month of October had him winning. Moreover, most polls estimated he would lose by 3-4%—some as high as 10%. However, he went on to retain his seat by a 1.8% margin.

Even Susan Collins (R-ME) seemed on the verge of being sent to an early retirement. Not one poll had her beating the Democratic challenger. The polling average from September to election day had her losing by 5.5%. Well, she won by 8.6%! That is a huge gulf of 14.1%. Another bad beat for the polling industry.

Armed with those examples of a much broader trend, you might understand why the forecast expected Sam Brown and Kari Lake to overperform, if not win outright—even in the face of cataclysmic polling. Until they moderated toward election day, most polls had those two candidates losing by between eight and thirteen percent. A potential double-digit loss! In a swing state!

It sounded like more bogus polls. Turns out, that was true. They were bogus. But, directionally, they still ended up being correct about who won.

When Trump lost Michigan in 2020, he lost by about 3%. John James, the Republican Senate candidate in Michigan, lost his race by about 1.7%. The aforementioned Thom Tillis (R-NC) won his race by 1.8% while Trump won NC by 1.4%. Former Senator McSally lost Arizona by 2.4% while Trump lost by 0.3%.

Time after time, the candidate for Senate and candidate for President performed remarkably similarly, as you would expect in an era where few voters split their tickets.

This stayed true in the three tossup seats in the Midwest in 2024. The separation between Hovde and Trump in Wisconsin was 1.8%. In Michigan, it was 2.7% between Trump and Mike Rogers. Lastly, in Pennsylvania, the GOP’s Senate and presidential candidates were separated by 1.5%. This falls right in line with expectations.

The Southwest didn’t get the memo. Trump won Arizona by 5.5% while Kari Lake lost by 2.4%—a 7.9% separation! In neighboring Nevada, Trump won by 3.1% while Sam Brown lost by 1.7%—a 4.8% separation.
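"Separation" as used here is just the gap between the two margins, with losses counted as negative numbers. A quick check against the Southwest figures quoted above:

```python
def separation(pres_margin: float, senate_margin: float) -> float:
    """Gap between a party's presidential and Senate margins in one state.
    Margins are signed: positive for a win, negative for a loss."""
    return abs(pres_margin - senate_margin)

# GOP margins quoted in the text.
print(round(separation(5.5, -2.4), 1))  # Arizona: 7.9
print(round(separation(3.1, -1.7), 1))  # Nevada: 4.8
```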

These two candidates uniquely underperformed, and the recruiters at the RNC need to reflect on why. (Moreover, reflect on why your Senate candidates barely lost in Michigan and Wisconsin while you spent pallets of cash in Maryland.)

I only released odds for the Senate, not final margin of victory estimates, so I cannot do the same comparison as I did for the presidency. However, we can still do some other comparisons.

Starting with the Economist, it had every single GOP swing state candidate going down by 4% or more. Kari Lake and Sam Brown were forecasted to go down by 6%! In the words of our current and former president, “Wrong!”

The Economist even had Bernie Moreno losing to incumbent Senator Sherrod Brown (D-OH) by 1%. Moreno went on to win by 3.6%, which was a serious underperformance of Trump’s 11-point win in Ohio but a victory nonetheless. The Economist also had Senators Ted Cruz (R-TX) and Rick Scott (R-FL) winning by just five points. The site was trying to tell you that those races would be closer than the races in swing states Nevada and Arizona!

Senator Cruz went on to win by 8% and Senator Scott by 13%.

The JHK forecast made similar blunders. It had the Democrat favored to win in Ohio—a 57% chance to win. It gave the Republican only an 84% chance of winning in Florida while giving the Democrat a 90% chance of winning in swing state Nevada; quite the comparison considering the GOP won by 13% in the Sunshine State while the Democrat squeaked out a win in Nevada. A similar comparison could be made between the Senate races in Texas and Michigan.

Unfortunately, the comparisons end there. The Silver Bulletin did not release a Senate forecast for 2024 as far as I can see. And FiveThirtyEight was shut down by Disney earlier this month—announced just two days after I released my graphic. Coincidence?

Overall, as much as those first two models can be critiqued, I cannot claim a victory over them. Going forward, I plan on adding more variables to the current Senate model, which got by with just one. For 2026, the model will also weigh whether a candidate in the race is an incumbent. Incumbents tend to win, and had that been included in the 2024 model, Senator Baldwin would likely have been correctly forecasted to win.
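One way to fold incumbency in (sketched here with invented coefficients, not the actual Pollstradamus model) is to shift a candidate's effective margin before converting it to a win probability:

```python
import math

def win_probability(pres_margin: float, incumbent: bool,
                    slope: float = 0.3,
                    incumbency_bonus: float = 1.5) -> float:
    """Logistic win probability from the co-partisan presidential margin,
    shifted by a bonus (in points) when the candidate is an incumbent.
    The slope and bonus here are illustrative, not fitted values."""
    effective = pres_margin + (incumbency_bonus if incumbent else 0.0)
    return 1.0 / (1.0 + math.exp(-slope * effective))

# Wisconsin 2024-style input: the Democrat trails the top of the ticket
# by roughly a point, but the incumbency bonus tips her above 50%.
print(round(win_probability(-0.9, incumbent=True), 3))
```

The point of the design is that a modest incumbency bonus flips near-fifty-fifty races like Baldwin's without disturbing calls in safe seats, where the presidential margin already dominates.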

But that addition won’t fix the problems in the Southwest. When polls continuously show a large gap between the top of the ticket and the Senate candidate, this model needs to listen more. Figuring out how to accomplish that will be one of my several chores over the next 36 months.

In summary, this model deserves nothing more than a C. It did nothing to separate itself from the field, all of which should receive the same mark.

House Grade

Ah. At last, we arrive here. The grade for my House forecast. It's an F.

I threw this together in one day, but that is not an excuse so much as an embarrassing admission. It never should have been released.

Simply put, the model did not appropriately account for gerrymandering, which is a big oversight.

The model followed a similar train of thought as the Senate model: Compare how the presidential candidates did with the House candidates.

It did not work. It spat out a prediction for a 237-seat GOP-controlled House—a full 17 seats more than the 220 they actually won! LOL.

I lament releasing it, but I am not as miffed about its performance as I am with the Senate model’s. I had actual confidence in the Senate model; in this one, I placed no faith.

With limited polling available for House races, I need to find other ways to add variables. I need to add an incumbency variable. I need to add a campaign finance variable. And I’ll think of more.

Summary

No surprise—where I put in the most effort, I performed the best. The presidential forecast will remain my primary emphasis among the three, but for 2026 and 2028, I will shore up the shortcomings of the other two.

For 2026, the task becomes further complicated without an anchor at the top of the ticket. Even in states with a gubernatorial contest, the federal races (House and Senate) are not tethered to it the way they are to the presidential race. That’s how you get a state like Kentucky electing a Democratic governor while five of its six Representatives are Republican.

More updates to come shortly.


  1. 50 states and DC plus 5 because Maine and Nebraska award electoral votes by congressional district in addition to the statewide winner.
  2. I actually think Trump would have won Virginia if Biden had stayed in… and Minnesota and New Hampshire. Maybe ME, NM, and NJ.