The central challenge in modern polling is not simply measuring opinion. It is ensuring that the people we hear from reflect the people we do not.
"Are the people answering my surveys like the people who are not?"
— The question that always sits in the back of my mind as a pollster
Polling today is hard. Not because people are doing anything wrong, but because the environment we are all operating in has changed in ways that I think introduce persistent, structural bias into our samples.
The bigger issue is how people are recruited into online panels and who ends up in them. That process is not random, and it introduces systematic differences that are not evenly distributed across the electorate.
Telephone surveys, which were once considered the gold standard, now face response rates so low that they introduce their own form of bias. We are no longer reaching a broad cross-section of the public. We are reaching a subset of people willing to answer unknown calls and stay on the line.
Most pollsters, including us, weight by demographics such as age, gender, region, and education. That helps. But over time, we have come to believe that it is not enough.
If demographic weighting cannot fully correct for imbalance, what can? One answer is to look at behaviour, not just characteristics.
Asking people how they voted in the last election gives us a benchmark that reflects the actual electorate. It allows us to compare the partisan composition of our sample to a known outcome and adjust accordingly.
If a group of voters is systematically underrepresented in your sample, and that underrepresentation is not fully explained by demographics, then weighting to match the actual distribution of past vote can help correct for that gap.
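The adjustment described above can be sketched in a few lines. This is a minimal, single-variable illustration using invented shares, not Abacus Data's actual figures or production weighting code: each respondent's demographic weight is scaled so that the sample's recalled-vote distribution matches the official result of the last election.

```python
# Hypothetical recalled past vote in a demographically weighted sample
sample_shares = {"CPC": 0.28, "LPC": 0.36, "NDP": 0.20, "Other": 0.16}

# Official vote shares from the last election (the weighting targets)
actual_shares = {"CPC": 0.34, "LPC": 0.33, "NDP": 0.18, "Other": 0.15}

# Adjustment factor per past-vote group: target share / sample share
factors = {p: actual_shares[p] / sample_shares[p] for p in sample_shares}

def past_vote_weight(base_weight: float, recalled_vote: str) -> float:
    """Scale a respondent's demographic weight by their group's factor."""
    return base_weight * factors[recalled_vote]

# A respondent who recalls voting CPC is weighted up...
print(round(factors["CPC"], 2))   # 1.21
# ...while a recalled LPC voter is weighted down.
print(round(factors["LPC"], 2))   # 0.92
```

In practice this adjustment would sit alongside demographic weighting (typically via raking, so the margins are balanced jointly rather than one at a time), but the intuition is the same: underrepresented past-vote groups get weights above 1, overrepresented groups below 1.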
We have been tracking recalled past vote since 2011. For this analysis, I took a sample of our surveys conducted between May 2014 and April 2026. Each survey asks respondents how they voted in the most recent federal election. Before we look at those answers, we weight the data by region, age, gender, and education to reflect the Canadian population. But importantly, we do not apply any past-vote weighting at this stage of the analysis.
That is deliberate. It allows us to see what a standard, demographically weighted sample looks like on its own, without any correction for partisanship.
In every single wave, across every single election cycle, Conservative voters are underrepresented in our demographically weighted sample relative to their actual share of the vote. Not in most waves. Not in some elections. In every case we can observe.
Two things stand out. First, this pattern is consistent. It is not driven by one election, one leader, or one moment in time. It holds across very different political contexts.
Second, the gap appears immediately after each election and persists throughout the cycle. If this were primarily a problem of memory, we would expect the gap to grow over time as recall becomes less reliable. But that is not what we see. The bias is present early and remains relatively stable within each parliament.
That suggests the issue is not primarily about what people remember or report. It is about who is in the sample in the first place.
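The gap described above is straightforward to quantify wave by wave. The sketch below uses invented wave figures purely for illustration; the real series comes from Abacus Data's tracking surveys.

```python
# (wave, demographically weighted recalled CPC share) after the 2021 election.
# These wave values are hypothetical.
waves = [("2021-11", 0.27), ("2022-06", 0.28), ("2023-02", 0.26)]

actual_cpc_2021 = 0.337  # official 2021 CPC vote share

# Gap = actual share minus recalled share; positive means CPC
# voters are underrepresented in that wave's sample.
gaps = [(wave, round(actual_cpc_2021 - share, 3)) for wave, share in waves]

# The pattern in our data: every gap is positive, in every wave
assert all(gap > 0 for _, gap in gaps)
print(gaps)
```

The key diagnostic is not any single wave's gap but the sign pattern: a gap that is positive immediately after the election and stays roughly stable points to sample composition rather than decaying recall.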
In our final polls for the last three federal elections, applying past vote weighting increased our Conservative vote estimates, reduced our Liberal vote estimates, and in five of six cases brought our numbers closer to the actual result.
| Election | Party | Without Past Vote Weight | With Past Vote Weight | Actual Result | Improvement |
|---|---|---|---|---|---|
| 2019 | CPC | 30% | 33% | 34.3% | +3 pts closer |
| 2019 | LPC | 36% | 34% | 33.1% | +2 pts closer |
| 2021 | CPC | 26% | 33% | 33.7% | +7 pts closer |
| 2021 | LPC | 37% | 33% | 32.6% | +4 pts closer |
| 2025 | CPC | 35% | 39% | 41.2% | +4 pts closer |
| 2025 | LPC | 45% | 41% | 43.7% | −1.4 pts (overshot) |
In every election, past vote weighting moved our Conservative estimates upward and our Liberal estimates downward — consistently in the direction of the actual result, though the 2025 Liberal adjustment overshot slightly. The 2021 election shows the most dramatic correction: a 7-point improvement in our Conservative estimate.
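The improvement figures can be recomputed directly from the table's estimates: the change in absolute error when past-vote weighting is applied, with positive values meaning the weighted estimate landed closer to the result.

```python
# (year, party, estimate without past-vote weight, with weight, actual result)
rows = [
    ("2019", "CPC", 30, 33, 34.3),
    ("2019", "LPC", 36, 34, 33.1),
    ("2021", "CPC", 26, 33, 33.7),
    ("2021", "LPC", 37, 33, 32.6),
    ("2025", "CPC", 35, 39, 41.2),
    ("2025", "LPC", 45, 41, 43.7),
]

for year, party, without, with_pv, actual in rows:
    # Positive = weighting reduced the error; negative = it overshot
    improvement = round(abs(without - actual) - abs(with_pv - actual), 1)
    print(year, party, improvement)
```

Running the numbers shows five clear improvements and one small overshoot (2025 LPC at −1.4): the adjustment moved in the right direction but slightly past the actual result, which is consistent with the overcorrection caveat discussed below.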
The 2025 election produced one of the most dramatic polling stories in Canada's political history — not because the outcome was surprising, but because different firms, measuring the same electorate at the same moment, arrived at conclusions sometimes twelve points apart.
In early January, on the eve of Justin Trudeau's resignation, every major polling firm was measuring a Conservative lead of between 25 and 29 percentage points. Whatever methodological differences existed, they were not producing meaningfully different pictures.
As Trudeau stepped aside, Mark Carney entered the Liberal leadership race, and Trump escalated tariff threats, the Conservative lead eroded across every tracker. By March 23, the average showed a Liberal lead of approximately five points. The direction was clear. The magnitude was not.
Pollster 1 was the clearest outlier on the Liberal-friendly side, averaging approximately seven percentage points above the consensus. In early April, while the average suggested a Liberal lead of around eight points, Pollster 1 was measuring fourteen.
At the other end, we and Pollster 2 were consistently measuring five to six points below the consensus. I raised questions about what might explain this discrepancy in the lead-up to the election.
The remaining firms — Pollster 3, Pollster 4, Pollster 5, and Pollster 6 — clustered closer to the average, though with their own tendencies.
*Figure: Liberal–Conservative margin (LPC% − CPC%) by pollster, January–April 2025. Actual result: Liberal +2.5. Dates are approximate for pre-campaign polls.*
The actual result — a Liberal margin of 2.5 points — sits below where most polling firms were measuring in the final weeks of the campaign. Did the electorate genuinely move toward the Conservatives in the final stretch? Or were even the tightest-reading firms still overcounting Liberal support due to persistent panel composition issues?
The pattern of the data favours a version of both. Some real movement likely occurred. But the size of the inter-firm spread, and the consistency with which methodologically similar firms clustered together, points strongly toward structural measurement differences as the primary driver of divergence.
It is important to be clear about what past vote weighting can and cannot do. It does not eliminate all sources of error. If there has been real, large-scale vote switching since the last election, past vote weighting can overcorrect by pulling people back toward their previous choice.
We saw this in 2025 with the NDP, where both weighted and unweighted estimates overstated support because of rapid vote movement during the campaign. Past vote recall is also imperfect. Academic research does find that some respondents misremember or reinterpret their past vote.
"I do not believe that other pollsters are doing anything wrong. Everyone in this industry is trying to solve the same problem with the tools and evidence available to them."
Our analysis suggests that, in the current cycle, a demographically weighted sample could be underrepresenting Conservative voters by about 5 to 7 points. If that structural gap is not corrected, polls will tend to show a larger Liberal advantage than may actually exist in the electorate.
This helps explain why our polling has often shown a tighter race than others. The difference is not what people tell us about their current vote. It is how we adjust the composition of the sample.
Past vote weighting remains imperfect. But it is, in our view, a better option than relying only on region, demographics, and education. And for that reason, we will continue to weight by past vote until we find a better way to correct for a persistent issue we have seen in our data for years.
Our mission is to understand people and turn that understanding into insight that leaders trust, use, and act on.
We don't just collect data; we interrogate it. Our commitment to debate and discussion reflects a research culture that prioritizes accuracy over convenience.
In the 2025 federal election, our final poll showed a Liberal lead of +2, one of the closest to the actual result of +2.5 among major polling firms. This was not luck. It was the product of a decade of methodological refinement.
Numbers without context are just noise. We combine rigorous research with clear judgment and influential storytelling to help leaders understand not just what Canadians think, but what it means for their decisions.
Whether you need public opinion research, market insights, or strategic counsel, Abacus Data delivers the rigour and clarity your decisions demand.