
How We Project the 2026 Local Elections: Every Step Explained

The V2 results are now live. Read the full 136-council projections covering council control, seat changes, and national vote share.

The one-sentence version

We built 65,000 fake British people from census data, gave each one a personality, asked 200 of them per council how they'd vote, then corrected the results using real election data, referendum results, and demographic statistics.


Step by step

Step 1: Build the fake people

We start with the 2021 Census. We know how many 55-year-old white women with a degree live in Birmingham, how many 22-year-old Pakistani men without qualifications live in Bradford, and so on for every combination of age, gender, ethnicity, education, income, and housing across 650 constituencies.

We generate 65,000 fictional people whose demographics, in aggregate, match the real population. Each one gets a name, a town, a job, a salary, a housing situation, and a household.

Each person also gets a personality profile across eight dimensions (our DYNAMICS-8 framework): how disciplined they are, how agreeable, how open to new ideas, how analytical, how emotionally volatile, how impulsive, how blunt, and how sociable. These are scored 0-1 and assigned based on demographic correlations from published psychology research.

Each person gets a political profile: which party they lean towards, how engaged they are (1-5), what issues matter to them, and who they voted for in 2019 and 2024.
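To make Step 1 concrete, here is a minimal sketch of how one synthetic person could be assembled: sample demographic attributes from census-derived shares, then nudge DYNAMICS-8 scores using demographic correlations. The field names, example shares, and trait adjustments below are illustrative assumptions, not the production code.

```python
import random

# Example census marginals for one constituency (illustrative numbers only).
CENSUS_AGE_BANDS = {"18-24": 0.11, "25-44": 0.34, "45-64": 0.32, "65+": 0.23}
TRAITS = ["disciplined", "agreeable", "open", "analytical",
          "volatile", "impulsive", "blunt", "sociable"]

def sample_categorical(dist):
    """Draw one category with probability proportional to its census share."""
    r, cum = random.random(), 0.0
    for category, share in dist.items():
        cum += share
        if r <= cum:
            return category
    return category  # guard against floating-point rounding

def build_person(constituency):
    person = {
        "constituency": constituency,
        "age_band": sample_categorical(CENSUS_AGE_BANDS),
        "degree": random.random() < 0.34,  # assumed share of degree-holders
    }
    # DYNAMICS-8: noisy baseline clipped to 0-1, nudged by an assumed correlation.
    person["traits"] = {t: min(1.0, max(0.0, random.gauss(0.5, 0.15))) for t in TRAITS}
    if person["degree"]:
        person["traits"]["open"] = min(1.0, person["traits"]["open"] + 0.1)
    return person

if __name__ == "__main__":
    print(build_person("Birmingham Edgbaston"))
```

Repeating this 65,000 times, with marginals drawn per constituency, yields a panel whose aggregate demographics track the census.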

Step 2: Pick 200 people for each council

For each of the 136 councils being contested, we pick 200 people whose demographics match that area. We don't pick randomly: the selection is weighted so the panel mirrors the council's demographic profile.

We also remove anyone whose last-voted party isn't standing in this council. No point asking a Scottish National Party voter about a Birmingham election.
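A minimal sketch of that selection step, assuming we weight candidates by how closely they fit the council's profile and drop anyone whose last-voted party has no candidates there. The weighting factors and field names are placeholders, not the model's actual rule.

```python
import random

def select_panel(people, council, panel_size=200):
    """Pick a demographically weighted panel for one council."""
    standing = set(council["parties_standing"])
    eligible = [p for p in people if p["last_vote"] in standing]

    def weight(person):
        # Closer demographic match to the council => higher chance of selection.
        w = 1.0
        if person["age_band"] == council["dominant_age_band"]:
            w *= 2.0
        if person["region"] == council["region"]:
            w *= 3.0
        return w

    weights = [weight(p) for p in eligible]
    # Sampling with replacement keeps the sketch simple; the real panel need not repeat people.
    return random.choices(eligible, weights=weights, k=panel_size)
```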

Step 3: Ask them how they'd vote

Each person gets a prompt that says: here's who you are, here's where you live, here's what happened last time, here's what's happening in the country right now. How would you vote?

They answer in three rounds:

Heart: Which party best represents your values? (Ignore who can win.)

Head: Given who can actually win here, would you vote differently? (Tactical voting.)

Final answer: Considering the national mood and how angry you are with the government, what's your actual vote?

They also rate their likelihood of voting on a scale of 1-10. If they say 1-3, we throw them out (they wouldn't actually vote). If they say 4-6, we count their vote at half weight. If they say 7-10, their vote counts in full.

The party order is shuffled randomly for each person so the AI doesn't just pick the first one listed.
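A sketch of what one Step 3 prompt could look like, with the party order shuffled before it is written out. The wording, field names, and persona details are illustrative; the real prompts carry much more context.

```python
import random

def build_prompt(person, council, parties, national_context):
    shuffled = random.sample(parties, k=len(parties))  # avoid first-listed bias
    return f"""You are {person['name']}, {person['age']} years old, living in {council['name']}.
At the last general election you voted {person['last_vote']}.
National context: {national_context}

Parties standing here: {', '.join(shuffled)}

Answer in three rounds:
1. HEART: which party best represents your values, ignoring who can win?
2. HEAD: given who can actually win here, would you vote differently?
3. FINAL: considering the national mood and your anger with the government, what is your actual vote?
Finally, rate your likelihood of voting from 1 to 10."""
```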

Step 4: Count the votes

We now have 200 votes per council. But raw vote counts from fake people are noisy and biased. The AI tends to over-predict the biggest party, under-predict small parties, and ignore local dynamics. So we correct the results.
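Before the corrections, the raw tally itself is simple. A sketch, applying the turnout weighting from Step 3 (1-3 discarded, 4-6 at half weight, 7-10 in full); the response format is an assumption.

```python
from collections import defaultdict

def tally(responses):
    """responses: list of (party, likelihood_1_to_10) pairs for one council."""
    totals = defaultdict(float)
    for party, likelihood in responses:
        if likelihood <= 3:
            continue                      # wouldn't actually vote
        totals[party] += 0.5 if likelihood <= 6 else 1.0
    grand_total = sum(totals.values()) or 1.0
    return {party: votes / grand_total for party, votes in totals.items()}  # vote shares
```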

Step 5: Apply 14 corrections

This is where the engineering matters. Each correction fixes a specific known bias. The 14 corrections fall into five groups: statistical cleanup, local election dynamics, grounding in reality, external data corrections, and smoothing.

Step 6: Classify confidence

Each projection gets a confidence label.

If a party jumps from under 10% last time to winning, we automatically downgrade to Toss-up. That kind of swing is possible but suspicious.
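A sketch of that downgrade rule; the incoming label is whatever the classification above produces.

```python
def classify(label, projected_winner, previous_share):
    """Downgrade any projected winner who polled under 10% at the previous election."""
    if previous_share.get(projected_winner, 0.0) < 0.10:
        return "Toss-up"   # a jump from under 10% to first place is suspicious
    return label
```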

Step 7: Calculate win probabilities

Instead of just saying "Reform wins Barnsley," we say "Reform has a 70% probability of winning Barnsley." We do this by resampling our 200 votes 1,000 times (pulling random subsets) and counting how often each party comes out on top. This captures the genuine uncertainty from having only 200 people.
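A minimal sketch of that resampling, done here with replacement; the production resampler pulls subsets and may differ in detail.

```python
import random
from collections import Counter

def win_probabilities(votes, n_resamples=1000):
    """votes: list of party names, one per panellist (roughly 200 entries)."""
    wins = Counter()
    for _ in range(n_resamples):
        sample = random.choices(votes, k=len(votes))      # resample with replacement
        wins[Counter(sample).most_common(1)[0][0]] += 1   # credit the sample's winner
    return {party: count / n_resamples for party, count in wins.items()}
```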

Step 8: Cross-check with a Bayesian prior

Before we even ask fake people, we build a "prior expectation" for each council using only hard data: previous election results, national polling swing, Brexit vote, deprivation index, ethnic composition. This gives us a best guess with no AI involved.

We compare the AI's answer against this prior. Where they agree, we're confident. Where they disagree, one of them is wrong and we flag it for review.
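A minimal sketch of that cross-check: build a prior from hard data, then flag any council where the prior and the AI panel disagree on the winner. The coefficients and inputs below are illustrative, not the actual prior.

```python
def prior_shares(last_result, national_swing, brexit_leave_share):
    """Hard-data prior: previous result + national swing + an assumed, dampened Brexit effect."""
    shares = {}
    for party, share in last_result.items():
        adjusted = share + national_swing.get(party, 0.0)
        if party == "Reform":
            adjusted += 0.1 * (brexit_leave_share - 0.5)  # illustrative coefficient
        shares[party] = max(adjusted, 0.0)
    total = sum(shares.values()) or 1.0
    return {party: s / total for party, s in shares.items()}

def flag_disagreement(ai_shares, prior):
    """Return None when both pick the same winner, else the pair needing review."""
    ai_winner = max(ai_shares, key=ai_shares.get)
    prior_winner = max(prior, key=prior.get)
    return None if ai_winner == prior_winner else (ai_winner, prior_winner)
```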

Step 9: Sensitivity test

We run the entire model six times with different assumptions: Reform stronger, Reform weaker, Green surging, strong anti-incumbent mood, status quo, and our baseline. If a council gives the same winner in all six runs, the projection is robust. If it flips, it's genuinely uncertain. Out of 136 councils, 135 are stable. Only Croydon changes winner depending on assumptions.
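A sketch of that sensitivity loop, assuming a run_projection() callable that stands in for the whole pipeline; the scenario adjustments shown are illustrative rather than the ones we actually use.

```python
SCENARIOS = {
    "baseline": {},
    "reform_stronger": {"Reform": +0.03},
    "reform_weaker": {"Reform": -0.03},
    "green_surge": {"Green": +0.04},
    "anti_incumbent": {"Labour": -0.02, "Conservative": -0.02},
    "status_quo": {},   # differs from the baseline in the full model; identical here for brevity
}

def sensitivity_test(run_projection, council):
    """A council is robust only if every scenario produces the same winner."""
    winners = set()
    for adjustment in SCENARIOS.values():
        shares = run_projection(council, adjustment)
        winners.add(max(shares, key=shares.get))
    return {"robust": len(winners) == 1, "winners": winners}
```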


What we tested it against

We ran the model (without the AI, just the statistical corrections) against 38 councils where we already know the real result from 2022-2024.

The 5-point gap means our corrections are slightly over-fitted to the councils we built them for. We're honest about this.

The main misses: local independents we can't predict (Havering, Tower Hamlets), outer London marginals where Labour and Conservative are within 3 points, and one council where the same party has two different labels (Labour vs Labour Co-op).

For comparison, the simplest possible model (just apply national polling swing to every council uniformly) predicts Reform winning 98 out of 136 councils. That's obviously wrong. Our model predicts Reform 45, Labour 37, Lib Dem 24, Conservative 21, Green 9. That's plausible.
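For reference, that naive baseline amounts to adding the national swing to every council's previous result and renormalising. A sketch, with illustrative inputs:

```python
def uniform_swing(previous_shares, national_swing):
    """Apply the national polling swing uniformly to one council's previous result."""
    parties = set(previous_shares) | set(national_swing)
    shares = {p: max(previous_shares.get(p, 0.0) + national_swing.get(p, 0.0), 0.0)
              for p in parties}
    total = sum(shares.values()) or 1.0
    return {p: s / total for p, s in shares.items()}
```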


What data we use

| Data | What it tells us | Where it's from |
| --- | --- | --- |
| Census 2021 | Who lives where | ONS |
| Previous election results | What happened last time | Wikipedia, BBC |
| National polls | The current national mood | Published polling averages |
| Brexit referendum 2016 | Which areas lean Reform | Electoral Commission |
| Deprivation index | Which areas are struggling | ONS |
| Ethnic composition | Where the Gaza effect applies | Census 2021 |
| Candidate counts | Which parties are actually standing | Democracy Club |
| By-election results | Calibration corrections | Published results |

What could go wrong

| Risk | How likely | What we do about it |
| --- | --- | --- |
| Fake people don't vote like real people | Unknown | This is the fundamental untested assumption |
| Tactical voting (people coordinating) | High | Blend with statistical model |
| Late campaign event shifts everything | Moderate | Run predictions as late as possible |
| Green Party still underpredicted | High | National correction + surge councils |
| Brexit-Reform link has decayed since 2016 | Moderate | Dampened coefficient, capped |
| No historical data for 98/136 councils | High | National polls + demographics + Brexit data |
| Our corrections are tuned to London councils | High | LOO cross-validation measures the overfitting |
| A popular independent candidate we can't predict | High | Nothing we can do. Acknowledged. |

The numbers

What happens on election night

We've identified the 20 councils that declare first (Sunderland at 11:30pm, then London boroughs, then northern cities by 4am). We have 8 bellwether councils with specific things to watch for ("if Reform wins Sunderland by 15+ points, they're having a great night nationally"). We have a live tracker that records real results against our projections as they come in. And we have three pre-written analysis pieces (great result, decent result, poor result) ready to publish within hours.

We published our success criteria before the election.

If we miss these targets, we publish exactly what went wrong and why. The pre-registration hash proves we didn't change our projections after seeing results.


This is a scientific experiment with a publicly stated hypothesis, not a marketing exercise with retrofitted claims.

Full data, methodology, and pre-registration hash available on GitHub:

View on GitHub