
On the BBC's own metric, KPM-1 was the most accurate model in the UK polling field. We did it with a few hundred quid of compute. We were also beaten on council control. Here is what both findings mean.

The BBC's official Projected National Share for the 7 May 2026 English local elections has just been published. KPM-1 had the lowest mean absolute error of any pollster in the field. We were also beaten on council-control accuracy by a specialist heuristic. Here is what the engine got right, what it got wrong, and what it is built for.

The engine in 60 seconds

KPM-1 is a synthetic consumer-prediction engine. It simulates 65,000 representative UK personas and returns reasoning at named-segment resolution. On 1 May 2026 we published predictions for every English council on the 7 May ballot. The SHA-256 hash of those predictions was committed to a public GitHub repository before voting opened. On 5 May we refreshed with a four-seed ensemble run and committed a second hash. Both still match the predictions JSON we publish today. Anyone can verify on github.com/Kronaxis/kpm1-election-projections.
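The pre-registration check is mechanical: hash the local predictions file and compare against the committed digest. A minimal sketch (file paths are illustrative; the real artefacts live in the GitHub repo):

```python
import hashlib

def sha256_of_file(path: str) -> str:
    """Stream a file through SHA-256 and return the hex digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(predictions_path: str, committed_hash: str) -> bool:
    """True iff the local predictions file matches the pre-registered hash."""
    return sha256_of_file(predictions_path) == committed_hash
```

Because SHA-256 is committed before results exist, a matching digest afterwards proves the predictions were not edited after the fact.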

We chose to run this test in public for one reason. Falsifiability is the only credibility primitive in polling that cannot be retroactively spun.

How KPM-1 compares

| Pollster | Cost | Time | MAE vs PNS | Open data | Hash pre-reg | Reasoning traces |
| --- | --- | --- | --- | --- | --- | --- |
| YouGov MRP | £80,000–£150,000 | 3 weeks | 3.20pp | No | No | None |
| Opinium tracker | £10,000–£20,000 / poll | 1 week | 3.20pp | No | No | None |
| 10-pollster consensus | n/a | n/a | 3.50pp | No | No | None |
| Electoral Calculus MRP | £30,000–£50,000 | 2 weeks | 5.00pp | No | No | None |
| KPM-1 | About £500 of compute | Minutes | 2.96pp | Yes | Yes | 19,376 published |

We are not on the same line as the established pollsters; we are on a different line entirely: three orders of magnitude lower cost, reasoning depth nobody else publishes, and, on this run, the lowest mean absolute error in the field on the BBC's own Projected National Share.

On the metric where the entire UK polling industry competes once a year, a few-hundred-pound synthetic-persona engine beat the £80,000–£150,000 MRP houses. We did it with the post-processor still broken.

What the engine actually did

Closest pollster on three of five parties. KPM-1 was the most accurate model on Labour (predicted 20.5%, PNS 20%, gap 0.5pp), Liberal Democrat (predicted 14.8%, PNS 17%, gap 2.2pp), and Green (predicted 13.1%, PNS 11%, gap 2.1pp). Close second on Reform UK (predicted 26.5%, PNS 30%, gap 3.5pp; Opinium led on this one at 3.0pp).
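The headline 2.96pp MAE is reproducible from the per-party figures in this post alone (the Conservative pair is given below, in the "Where we got it wrong" section):

```python
# (predicted share, BBC Projected National Share), percentage points,
# as published in this post.
results = {
    "Reform UK":        (26.5, 30.0),
    "Labour":           (20.5, 20.0),
    "Conservative":     (21.5, 15.0),
    "Liberal Democrat": (14.8, 17.0),
    "Green":            (13.1, 11.0),
}

mae = sum(abs(pred - pns) for pred, pns in results.values()) / len(results)
print(f"MAE vs PNS: {mae:.2f}pp")  # MAE vs PNS: 2.96pp
```

Five absolute gaps (3.5, 0.5, 6.5, 2.2, 2.1) sum to 14.8pp; divided by five parties, that is the 2.96pp in the comparison table.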

Five of five on national directional accuracy. Reform gaining, Labour losing, Conservative losing, Liberal Democrat gaining, Green gaining. Every direction call correct ahead of voting.

The Sunderland breakthrough — predicted on a 0.5 percentage point margin. Sunderland was the headline Reform-breakthrough story of the night. The council had been Labour-controlled for fifty years. KPM-1 called it for Reform UK on a toss-up margin of 0.5 percentage points. Reform took 39 of 66 seats. Even at the slimmest possible confidence band, our vote-share signal landed.

Reasoning traces published openly. 19,376 individual persona-level traces explaining who voted for whom and why. Per-council theme breakdowns. Tactical-switching flows. Direct persona quotes from named demographic segments. No traditional pollster publishes any of this. YouGov gives you a number. We give you nineteen thousand explanations.

Regional reasoning signatures captured correctly. Yorkshire and Humber personas over-indexed on Immigration by 8 percentage points and on Crime/Safety by 8 percentage points versus the national baseline. The strongest Reform-voter signature in our entire dataset. North East personas over-indexed on Tax/Spending by 18 percentage points. Both signatures aligned with the regional results.

Two findings that travel beyond this election. Cost of living was cited by 75% of personas nationally and by 71% to 82% in every region we modelled. The single most consistent driver of vote choice across the country. Of 19,376 traces, only 16 (0.1%) reasoned in tactical terms. The political-class narrative that tactical voting would swing this election is not supported in our trace data, and we will be defending that finding in the methodology paper.

The cost. About £500 of compute, run on our own hardware, finished in minutes.

Where we got it wrong

Three honest admissions. Each specific, each fixable, each dated.

Council-control accuracy. KPM-1 did not include "No overall control" as a possible council-control output. In a five-party fragmented landscape, NOC is one of the modal outcomes. We did not model it as a class. The model picked the highest-share party as winner regardless of how narrow the margin was. Roughly two thirds of our missed predictions are councils where we picked a clean party winner and reality fragmented to NOC.

PollCheck, the only other model that ran the same test publicly, sits at roughly 88% on council-control accuracy. KPM-1 sits at 28.5% on the full 130-council declared set, and just 1 of 10 on our highest-confidence "lean" track (10%). They beat us by roughly 60 percentage points on the headline metric and we are not going to pretend otherwise.

Conservative over-projection. KPM-1 said Conservative would land at 21.5% nationally. The PNS came in at 15%. We over-projected Tory vote share by 6.5 percentage points — the single largest per-party miss in our prediction set, worse than any traditional pollster. The Conservative collapse was sharper than our model captured. KPM-2 needs a more aggressive Tory-decline trend, calibrated against the Reform-displacement-of-Conservative signal we now have hard data on.

We also discovered our "lean" confidence ratings are not properly calibrated. When the model said "lean" rather than "toss-up" we should have been measurably more accurate. We were not.
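The calibration check itself is simple: bucket predictions by confidence label and compare hit rates; a calibrated model's "lean" bucket should be strictly more accurate than its "toss-up" bucket. A minimal sketch with illustrative data (not our actual scorecard):

```python
from collections import defaultdict

def accuracy_by_confidence(predictions):
    """predictions: list of (confidence_label, was_correct) pairs.
    Returns {label: hit_rate} for each confidence bucket."""
    hits, totals = defaultdict(int), defaultdict(int)
    for label, correct in predictions:
        totals[label] += 1
        hits[label] += bool(correct)
    return {label: hits[label] / totals[label] for label in totals}

# Illustrative data only -- not our real ledger.
sample = [("lean", True), ("lean", False), ("toss-up", True), ("toss-up", True)]
print(accuracy_by_confidence(sample))  # {'lean': 0.5, 'toss-up': 1.0}
# Here 'toss-up' outperforms 'lean': the miscalibration signature we found.
```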

The fix

KPM-2 ships by 31 May 2026 with six architectural changes:

  1. Probability-distribution output across {Reform, Labour, Conservative, LD, Green, Other, NOC}, summing to 1.
  2. Explicit NOC class with margin-based threshold rule (default <7pp → NOC). Applied retroactively to KPM-1 on the full 130-council declared set, this single rule moves council-control accuracy from 28.5% to 49.2% — a +20.8pp lift that held essentially unchanged as the sample grew from 51 declared to 130 declared.
  3. Bootstrap-based confidence calibration replacing the broken predicted-margin calibration.
  4. Pre-registered alternative post-processing rules. Multiple rules shipped at prediction time. Data adjudicates.
  5. Turnout and abstention model with per-persona turnout probabilities.
  6. Conservative-decline calibration informed by 7 May 2026 ground truth.
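Change 2 is simple enough to state in code. A sketch of the margin-based NOC gate with the default 7pp threshold (party names and shares are illustrative):

```python
def council_call(shares: dict, noc_threshold: float = 7.0) -> str:
    """Apply the margin-based NOC rule: if the top-two vote-share margin
    is below the threshold (default 7pp), call the council 'NOC' rather
    than handing it to the narrow leader."""
    ranked = sorted(shares.items(), key=lambda kv: kv[1], reverse=True)
    (leader, top), (_, second) = ranked[0], ranked[1]
    return "NOC" if top - second < noc_threshold else leader

# Clean winner: an 11pp margin clears the gate.
print(council_call({"Reform": 39.0, "Labour": 28.0, "LD": 12.0}))  # Reform
# Fragmented council: a 4pp margin falls below the 7pp gate.
print(council_call({"Reform": 26.0, "Labour": 22.0, "Green": 20.0}))  # NOC
```

This is exactly the class KPM-1 lacked: the old behaviour is equivalent to `noc_threshold=0`, which always returns the narrow leader.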

The methodology paper update will be published openly on or before 31 May. The new hash receipt for KPM-2 will go on GitHub before any future falsifiable claim. The 1 May 2026 hash receipt does not move.

Where we are now (11 May update)

It is four days since the post-mortem above. KPM-2 work has progressed faster than the 31 May milestone implied. Here is the honest state of the portfolio, with every number externally verifiable.

KPM-2.2 v15.1 — the council-fragmentation rule. A hand-crafted six-rule post-processor that sits on top of KPM-1's vote shares. On the same 130-council sample where KPM-1 lands at 28.5%, v15.1 hits 77 of 130 = 59.2% — a +30.7 percentage point lift over the synthetic-panel baseline by adding the NOC class, regional retain rules, and a fragmentation gate. Methodology hash 52df676e792c29c6…, frozen. Source and reproducibility test public at github.com/Kronaxis/kpm.

Continuous by-election engine. Every Thursday a UK council by-election happens somewhere in the country. Our engine ingests the ALDC results feed, generates a prediction at ward level, hashes it, commits it to the scorecard ledger before polls close, then auto-scores it against the result the following week. Twelve by-elections predicted to date; nine hits (75%). The misses are documented alongside the hits in the same JSON.
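The auto-scoring step is a few lines of JSON bookkeeping. A hedged sketch (the field names below are illustrative, not the ledger's actual schema):

```python
import json

def score_ledger(ledger_json: str):
    """Score a by-election ledger. Each entry carries a pre-committed
    'predicted' winner and, once declared, an 'actual' winner; entries
    still awaiting a result have actual=None and are skipped."""
    entries = json.loads(ledger_json)
    scored = [e for e in entries if e.get("actual") is not None]
    hits = sum(e["predicted"] == e["actual"] for e in scored)
    return hits, len(scored)

ledger = json.dumps([
    {"ward": "A", "predicted": "LD",     "actual": "LD"},
    {"ward": "B", "predicted": "Reform", "actual": "Labour"},
    {"ward": "C", "predicted": "Labour", "actual": None},  # not yet declared
])
hits, total = score_ledger(ledger)
print(f"{hits}/{total}")  # 1/2
```

Because the prediction hash is committed before polls close, the scoring pass can only ever reveal the accuracy of what was already locked in.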

Other elections. Backtests against the 2026 Scottish Parliament election (1/1) and the 2024 London Mayoral election (1/1) both hit. The 2026 Senedd backtest missed (0/1) — kept visible as the methodology's weakest cell. Sample sizes are small; these are early signals.

Ward-level methodology (v17 series). Real per-ward priors from the Democracy Club API, augmented with ONS Census 2021 demographics and Hanretty's 2016 Brexit constituency estimates, drive a per-ward Uniform National Swing projection with Reform-target detection and an ensemble fall-back to v15.1. Honest finding on a 40-council sample with full data coverage: v17.10 ties v15.1 at 29/40 = 72.5% — bootstrap 95% CI on the difference is [-17.5, +17.5] percentage points, so we cannot reject the null that the two methodologies are equally accurate at this sample size. The structural advantage of v17.10 is on Reform UK recall (catches roughly twice as many Reform council wins as v15.1 does on its broader sample), not on overall accuracy. Methodology hash 2ea86b8d1e25ee68…, frozen. Pre-registered for the 2027 county and 2028 metropolitan borough cycles — locked in advance of the test, no in-place tweaks allowed, machine-verified daily by CI.
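The bootstrap comparison behind that confidence interval can be sketched as a paired resample over the shared council sample (a generic implementation under standard assumptions, not our exact harness):

```python
import random

def bootstrap_diff_ci(a_hits, b_hits, n_boot=10_000, seed=0):
    """Paired bootstrap 95% CI (in percentage points) for the accuracy
    difference between two methods scored on the same councils.
    a_hits/b_hits: parallel lists of 0/1 outcomes, one per council."""
    rng = random.Random(seed)
    n = len(a_hits)
    diffs = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]  # resample councils
        da = sum(a_hits[i] for i in idx) / n
        db = sum(b_hits[i] for i in idx) / n
        diffs.append((da - db) * 100)
    diffs.sort()
    return diffs[int(0.025 * n_boot)], diffs[int(0.975 * n_boot)]
```

Run on a 40-council paired sample where both methodologies hit 29/40 but disagree on which councils they hit, the interval straddles zero, which is the cannot-reject-the-null result reported above.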

What does NOT yet work. Both v15.1 and v17.10 have 0% recall for Conservative and Green council wins across the 130-council sample — 14 of 130 cases that neither methodology catches at all. The council-fragmentation rule structurally cannot output Conservative or Green as a council winner. Addressing this needs a successor methodology with separate Conservative-retention and Green-breakthrough models. That work is in scope for KPM-3, not KPM-2.

Cross-cycle 2024 backtest — attempted, infeasible at scale. We tried to validate v17.10 by predicting 2024 metropolitan borough actuals from 2022/2023 ward priors. 21 of 26 attempted councils (81%) had ward boundary review between 2022 and 2024 — ward names changed, so Democracy Club ballot IDs no longer matched. The methodology cannot be cross-cycle validated against 2024 for that reason. Documented honestly; not hidden. Genuine cross-cycle validation waits for the 2027 county election cycle.

Public verification of every number above

Anyone with a curl client and a Python interpreter can reproduce every accuracy claim above from publicly committed code on a clean machine. That is the falsifiability discipline this post is built on.

Hire us for the right thing

If your question is "who will win the next English local elections", we are not yet the right answer on this evidence. PollCheck is. Use them this cycle. KPM-2 will be back next test.

If your question is a consumer-research question rather than an electoral one, we are exactly the right answer today.

Traditional MRP polling answers those questions in 4-8 weeks for £80,000 upwards. We answer them in minutes for £500 in compute, with reasoning at persona resolution and full open data.

The election was the public test. We chose to be measured. On the BBC's own metric of national vote-share accuracy, we beat the entire UK polling field. We were beaten on the council-control metric the engine was not built for. The pre-registration hash from 1 May has not moved. The methodology paper update with the NOC fix and the Conservative-decline recalibration lands by 31 May.

That is what running a public test on the hardest forecast in the UK calendar reveals about a synthetic consumer-prediction engine.

Hire us for the right thing.

Browse all 136 predictions →

Hash: 1fd2be14dc6e014809592408fe1e6b6d1a0f99b46f74e079ebdb52ba3dbd9c41 · GitHub: kpm1-election-projections · Methodology paper: DOI 10.5281/zenodo.19361059 · Commercial: jason@kronaxis.co.uk