I’ll announce the winners from my election pool later this week. One of the questions there asked which polling company’s final poll numbers would come closest to the mark.
You can browse the numbers here. To pick a winner, I simply added up the absolute differences between each company’s final numbers and the actual results (there’s a quick sketch of the math after the list):
Angus Reid: 5.2%
Nanos: 5.5%
Ipsos: 6.0%
Decima: 6.4%
Leger: 7.2%
Abacus: 9.0%
Forum: approx 9% (BQ and Green numbers extrapolated)
Ekos: 10.3%
Compass: 14.0%
So congrats to those of you who picked Angus. The top 6 companies on that list were within the margin of error on their numbers, so they too deserve a round of applause.
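For anyone who wants to replicate the scoring, it’s nothing fancier than a sum of absolute errors across the parties. Here’s a minimal Python sketch – the numbers in it are placeholders, not anyone’s actual final poll:

```python
# Score each pollster by summing the absolute differences between its final
# published numbers and the actual vote shares. All figures here are
# placeholders for illustration.

ACTUAL = {"CPC": 40.0, "NDP": 31.0, "LIB": 19.0, "BQ": 6.0, "GRN": 4.0}

FINAL_POLLS = {
    "Pollster A": {"CPC": 37.0, "NDP": 33.0, "LIB": 19.0, "BQ": 6.0, "GRN": 5.0},
    "Pollster B": {"CPC": 35.0, "NDP": 30.0, "LIB": 22.0, "BQ": 7.0, "GRN": 6.0},
}

def total_miss(poll, actual):
    """Sum of absolute errors, in percentage points, across all parties."""
    return sum(abs(poll[party] - actual[party]) for party in actual)

# Rank pollsters from closest to furthest off the mark.
for name, poll in sorted(FINAL_POLLS.items(), key=lambda kv: total_miss(kv[1], ACTUAL)):
    print(f"{name}: {total_miss(poll, ACTUAL):.1f}%")
```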
As for the seat projections, here’s the total seat miss:
Riding by Riding: 52
LISPOP: 56
Calgary Grit: 56
Ekos: 58
Democratic Space: 58
Trendlines: 59
308.com: 98
Election Prediction Project: 118
So a similar performance by all the mathematical models, except for 308, which has already offered a brief post mortem. I will add that my prediction was further off the mark than my projection – I made the same faulty assumption the EPP did: that strong incumbents could hold their seats.
The largest problem with my projection was the polls it was fed – specifically the low Conservative numbers (which I did foresee as a potential problem). If I plug the actual vote numbers in, my model projects CPC 168.8, NDP 94.6, Lib 34.0, Bloc 10.1. The regional splits break down nicely too, except for Quebec, where I’m a bit high on the Bloc and low on the NDP.
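For readers wondering what a projection model actually does with those poll numbers, the crudest version is a uniform swing: shift each party’s vote in every riding by its change in the national numbers, then hand each riding to the new leader. My actual model is more involved than that, but here’s a bare-bones sketch with made-up ridings, just to show the mechanics:

```python
# Toy uniform-swing seat projection. This is NOT the model discussed above,
# just the simplest illustration of turning poll numbers into a seat count.
# All numbers below are hypothetical.

LAST_ELECTION = {"CPC": 38.0, "NDP": 18.0, "LIB": 26.0, "BQ": 10.0, "GRN": 7.0}  # national vote last time
POLL = {"CPC": 37.0, "NDP": 31.0, "LIB": 19.0, "BQ": 6.0, "GRN": 4.0}            # current poll numbers

# Hypothetical riding-level results from the last election.
RIDINGS = {
    "Riding 1": {"CPC": 45.0, "NDP": 20.0, "LIB": 30.0, "BQ": 0.0, "GRN": 5.0},
    "Riding 2": {"CPC": 30.0, "NDP": 28.0, "LIB": 22.0, "BQ": 15.0, "GRN": 5.0},
    "Riding 3": {"CPC": 25.0, "NDP": 22.0, "LIB": 28.0, "BQ": 20.0, "GRN": 5.0},
}

def project(ridings, baseline, poll):
    """Shift each party's riding vote by its national swing and award each riding to the leader."""
    swing = {p: poll[p] - baseline[p] for p in baseline}
    seats = {p: 0 for p in baseline}
    for votes in ridings.values():
        adjusted = {p: votes[p] + swing[p] for p in votes}
        seats[max(adjusted, key=adjusted.get)] += 1
    return seats

print(project(RIDINGS, LAST_ELECTION, POLL))
```

Even in this toy version you can see why garbage in means garbage out: feed it poll numbers a few points off and every single riding shifts along with them.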
But my model was supposed to be able to handle pollsters missing the mark. A few of the results fell outside the 95% confidence interval, so this is, as Jack Layton would say, a hashtag fail.
I’ll put this one to bed for a bit and start tinkering again over the summer, but I think this speaks to the limitations of any seat projection model. They’re useful tools, but it’s incredibly naive to assume they can predict the total seat count, much less individual riding results.
But that’s ok. If they worked, it would make election nights a bore.