1. Polling and Ballot Box Scotland

This page contains a (relatively) simple explanation of how polling works, and is intended to answer some of the questions Ballot Box Scotland regularly receives on Twitter. Before getting into how polls themselves work, it’s worth briefly touching on how Ballot Box Scotland itself relates to polls.

1.A. It's Not My Poll

Most folk who follow Ballot Box Scotland are aware of this, but occasionally someone will dismiss a poll because they’ve never heard of me. Rarely, and much to my amusement, I’m actively rubbished as “some guy with a laptop in a basement!” (For the record, I actually live in a flat at the bottom of a hill which is raised substantially from street level, so I get a lot of natural light.)

Obviously, this is silly, because I don’t run polls myself. I only report on polls conducted by actual polling agencies! I have neither the expertise nor resources to run polls of my own, and I’d also never be caught dead running a self-selecting Twitter poll and reporting that as if it were accurate. I may in future commission some polling, but such polls will be conducted by a business actually equipped to do so.

1.B. Reputable Pollsters Only

Polls reported by Ballot Box Scotland come exclusively from British Polling Council accredited pollsters. BPC members have to follow certain standards of transparency and methodology, and are therefore pretty trustworthy. The most prominent non-BPC pollster is Lord Ashcroft, though he occupies a weird sort of grey zone: he does the poll design and weighting himself, but subcontracts the actual fieldwork to another company, which may well be BPC-accredited.

Although BBS has at least once reported on an Ashcroft poll in the past, that was without realising he wasn’t a BPC member. The poll in question has been removed from BBS trackers and future Ashcroft polling is very unlikely to be covered.

1.C. Holyrood Focus

Since part of the Ballot Box Scotland mission is to improve understanding of and engagement with the specifically Scottish sphere of politics that has arisen since Devolution, the most important polls are those for the Scottish Parliament. Only polls including Holyrood voting figures are guaranteed to receive full BBS coverage in the form of an analysis post on the website.

Polls covering any combination of Westminster, Independence, EU/Brexit or other issues without Holyrood figures are only guaranteed Twitter coverage and a log entry in any relevant tracking tables. A non-Holyrood poll will only get a full analysis post if I feel circumstances demand it.

2. Sampling - How 1000 Respondents Represent Scotland

Look at the replies to any (widely seen) tweet reporting polling figures, and you’ll see people dismissing the poll out of hand for only having asked about 1000 people, or saying that they personally have never been asked. Whilst it’s perfectly reasonable to wonder how 1000 people can accurately reflect the views of millions, all too often it’s a partisan dismissal of results people don’t like rather than an actual question. This page assumes good faith, of course!

2.A. They Can't Ask Everyone

Starting with the easier issue, there are two reasons why you haven’t been asked. The first is simply that there are a lot of people in Scotland – about 4 million voters, in fact. If every poll has around 1000 respondents, and every single person in Scotland were to be polled once, there’d need to be 4000 polls. That’s a completely impractical polling intensity, and one I’m glad doesn’t exist, as I’d never be able to keep on top of them! For comparison, between the last election in 2016 and June 2020, when this page was published, Scottish Parliament polls numbered a much more manageable 36.

The second reason is how polls are conducted. Some polling organisations still use the classic “dial random numbers” approach, which often relies on you having a landline phone to be picked. If you haven’t got a landline, you can’t be polled. Other pollsters make use of online panels. Participants in panel polls are generally selected based on known characteristics (for example their age, or stated past voting behaviour) to build a sample that is representative of the overall population. If you aren’t registered with the agency, you can’t be picked for their panels. There are pros and cons to each way of building a sample, but both are pretty accurate when done well.

2.B. Why 1000?

Moving onto why 1000 people is a sufficient number, the short answer is “because the maths tells us it is.”

The longer answer is that, for statistical reasons I won’t pretend to know in great depth, it has been consistently shown that once you’re dealing with an overall population of at least tens of thousands, a random sample of 1000 people will give a reasonably accurate reading of opinion. The key term there is “random”, and we’ll return to that shortly. To be precise, what the maths tells us is that for a random sample of 1000 people, we can be 95% certain that the polled figure falls within a margin of error of 3.1% either side of the actual number. That means if 50% of people say they’ll vote a certain way, we can be pretty sure the actual figure is between 46.9% and 53.1%.
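
For the curious, that 3.1% comes from the standard textbook formula for a 95% confidence interval on a proportion: roughly 1.96 times the square root of p(1-p)/n. A quick sketch of the calculation, nothing more official than that:

```python
from math import sqrt

def margin_of_error(p, n, z=1.96):
    """95% margin of error for a proportion p from a random sample of n.
    z=1.96 is the standard normal multiplier for 95% confidence."""
    return z * sqrt(p * (1 - p) / n)

# A party polled at 50% in a 1000-person sample:
print(round(margin_of_error(0.5, 1000) * 100, 1))  # 3.1 (%), i.e. 46.9% to 53.1%
```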

If you want a smaller margin of error, the required sample size balloons – roughly quadrupling every time you halve the margin. To get a nice round 3%, you need to poll 1067 people. To get down to 2%, you need 2400. And to get to 1% you need a whopping 9581. You can use this very useful tool to check these figures for yourself. Obviously, if a polling firm is sampling more people, the time and costs associated with the poll increase. Generally speaking it isn’t worth more than doubling your sample size just to shave another 1% off the margin of error, never mind trying to aim for 1% or lower. A margin of around 3% suffices for most normal polling purposes, so that’s what is widely used.
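
Those exact figures can be reproduced with the standard sample size formula plus a “finite population correction” – here I’m assuming the roughly 4 million voters mentioned in 2.A., which is (I believe) how the tool arrives at 9581 rather than the textbook 9604. A quick sketch:

```python
from math import ceil

def sample_size(moe, population=4_000_000, z=1.96, p=0.5):
    """Sample size needed for a given 95% margin of error (moe as a fraction),
    using the worst case p=0.5 and a finite population correction."""
    n0 = (z ** 2) * p * (1 - p) / moe ** 2          # infinite-population answer
    return ceil(n0 / (1 + (n0 - 1) / population))   # finite population correction

for moe in (0.031, 0.03, 0.02, 0.01):
    print(f"{moe:.1%} margin of error needs {sample_size(moe)} people")
    # 3.1% -> 1000, 3.0% -> 1067, 2.0% -> 2400, 1.0% -> 9581
```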

One other thing to bear in mind about that margin of error is that it actually gets smaller the further away from 50% a polled figure is. This isn’t widely stated because it isn’t that necessary to know, but what it means is that if a poll finds a party on 3%, that doesn’t actually mean the real figure could be anywhere between 0% and 6%. Using that same tool again, in a poll of 1000 folk, 3% of whom say they are voting for that party, the margin of error is 1.06% – or in other words, the real figure is 95% likely to be between 2% and 4%.
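
The reason is visible in the same formula: the p(1-p) term peaks at 50% and shrinks as the polled figure approaches 0% or 100%. Plugging in the 3% example:

```python
from math import sqrt

# Same textbook formula as above: the p*(1-p) term shrinks away from 50%.
moe = 1.96 * sqrt(0.03 * 0.97 / 1000)  # a party on 3% in a 1000-person poll
print(round(moe * 100, 2))              # 1.06 (%), so roughly 2% to 4%
```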

Note that as Ballot Box Scotland only reports on reputable polls, I don’t typically state the sample size. It’ll always be at least that 1000, and most readers won’t really be that interested in whether the sample is large enough for a 3.1% or 3% or 2% error. For those who are savvy to sample sizes and so on, I take it for granted they know everything on this page anyway.

2.C. A Carefully Selected and Weighted Random

As stated above, the most important thing for polling accuracy is that you have a random sample. Counter-intuitively, this often requires pollsters to very carefully select their sample. There are a lot of things that might make what you’d assume to be “random” unrepresentative of the population.

For example, if you were to go into Glasgow and stop 1000 people on Buchanan Street between 2pm and 4pm on a weekday, completely at random, you wouldn’t actually have a truly random sample. You wouldn’t be picking up people who are working at that time, or who have to get the kids from school, and most of those you did stop would be from Glasgow or nearby. If you asked them about their Scottish Parliament voting intention, the result wouldn’t even be properly representative of Glasgow, and certainly not of Scotland as a whole.

So getting a sample that’s actually random can require some work. We know that Glasgow has about 12% of Scotland’s population, so if you were polling 1000 people about the Scottish Parliament, you’d only want around 120 of them to be from Glasgow. We also know about 45% of people voted for the SNP in the December 2019 UK Election, so you’d want your poll to have around 450 people who tell you that’s how they voted then. You’d also want to make sure you’ve got an appropriate gender balance and age distribution. Even with the most careful selection, it’s still possible to end up with an unrepresentative sample. That’s where the concept of “weighting” comes in.
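
Before getting to weighting, it might help to see those selection targets as rough numbers. This is purely a toy sketch – the Glasgow and SNP shares are the ones quoted above, and the 51% figure for women is my own illustrative assumption, not any pollster’s actual quota:

```python
# Toy quota targets for a 1000-person Scottish Parliament poll.
# Glasgow and SNP-2019 shares are quoted in the text above; the gender
# split is an illustrative assumption, not any pollster's real quota.
population_shares = {
    "lives in Glasgow": 0.12,
    "voted SNP in 2019": 0.45,
    "women": 0.51,  # assumed for illustration
}

n = 1000
for group, share in population_shares.items():
    print(f"{group}: aim for ~{round(share * n)} of {n} respondents")
```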

Say that 55% of respondents to a poll say they voted SNP in 2019. The relative “weight” of those responses will be reduced so that they instead amount to roughly 45%, to match the known result. Similarly, if only 30% of respondents were women, those responses will be weighted more highly to reflect the actual gender balance of the country. Weighting can similarly be applied to age, class, education, location and so on. In this way, polling agencies do try very hard to make sure their polls are as accurate a reflection of the overall population as they can be, and thus as close to “random” as possible.
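
At its simplest, weighting is just the target share divided by the observed share. Real pollsters use more sophisticated schemes that balance several characteristics at once, but a minimal sketch of the basic idea, using the figures above (the 51% target for women is again my illustrative assumption):

```python
# Minimal weighting sketch: weight = target share / observed share.
def weight(target_share, observed_share):
    return target_share / observed_share

# 55% of respondents said they voted SNP in 2019, versus the known 45%:
print(round(weight(0.45, 0.55), 2))  # 0.82 - each such response counts a bit less

# Only 30% of respondents were women, versus an assumed ~51% of the population:
print(round(weight(0.51, 0.30), 2))  # 1.7 - each such response counts more
```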

3. Subsamples - Just Say No!

As part of the transparency required of members of the British Polling Council, reputable pollsters will make the “tables” for their polls available. These explain the methodology of the poll, as well as showing the makeup of the sample and the impact of weightings. The tables will therefore often have breakdowns of findings by various demographics, including age, gender, past vote, and so on. These broken-down findings are referred to as “subsamples”, as they are a small sub-set of the overall sample. What is extremely important to bear in mind is that they are not accurate as standalone findings.

As discussed in 2.B. above, to get a margin of error within about 3%, you need to ask 1000 people. The most common subsample Scottish folk might encounter is a Scottish subsample of a GB poll – remember that NI has its own party system, so is not generally included in polls of Westminster voting intention in the rest of the UK. Scotland has about 8.4% of the GB population, so you’d expect a Scottish subsample to be about 84 folk in your average 1000-person poll. Even assuming that was somehow still a perfectly random 84 Scottish voters, the margin of error at that sample size is a whopping 10.7%.
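
Running the same formula from 2.B. on a sample of 84 shows where that figure comes from:

```python
from math import sqrt

# Same 95% margin of error formula as in 2.B., for an 84-person subsample.
moe = 1.96 * sqrt(0.5 * 0.5 / 84)
print(round(moe * 100, 1))  # 10.7 (%)
```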

However, it won’t necessarily be a perfectly random sample in the first place. Subsamples also aren’t weighted within themselves – the Scottish subsample could have more older respondents than the actual voting population, for example, which would further dent its accuracy. The same cautionary tale applies whatever kind of subsample you’re looking at, with age breakdowns being another popular one. That’s why subsamples should almost always be avoided, and you should definitely ignore anyone making a big deal out of them.

It is important to note that this doesn’t mean the SNP (or Plaid, for Wales) figure in GB polls is inherently inaccurate – it’s just as accurate as any other party’s, within the bounds of a GB-wide result. Additionally, when aggregated over multiple polls, a relatively stable finding can be given a bit more trust. But it does mean that if you want an accurate sense of Scottish (or Welsh) polling within its own context, you need specifically Scottish (or Welsh) polls.
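
As a rough illustration of why aggregation helps – and this generously assumes the subsamples behave like random samples, which as noted above they don’t quite – pooling ten 84-person subsamples gets you close to the precision of a single full-size Scottish poll:

```python
from math import sqrt

# Illustrative only: treat ten ~84-person Scottish subsamples as one pooled
# sample of 840 (generous, since subsamples aren't individually weighted).
pooled_n = 10 * 84
moe = 1.96 * sqrt(0.5 * 0.5 / pooled_n)
print(round(moe * 100, 1))  # 3.4 (%) - close to a single full Scottish poll
```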

4. There's No Business in Falsehoods

Another common critique of polling is that a given polling agency can’t be trusted because it’s biased in favour of one party or another. Often, this is accompanied by triumphant pointing out of the political affiliations (both real and perceived) of senior figures involved in that company. The example par excellence is YouGov, which counted a Conservative MP amongst its founders. Quite apart from the fact that the people who founded a business, and may run it at a senior level, aren’t the ones doing the hard graft, this is an overblown worry.

It shouldn’t actually surprise us that people who have some degree of involvement in politics would also be involved in polling agencies – yet somehow it does. It’s a bit like the occasional person who pops up to claim Ballot Box Scotland must be biased, given I’m transparent about my personal party involvement (which I firmly believe I manage to keep from influencing my BBS coverage!). If you think about it, anyone who is interested enough in politics to provide some form of coverage will obviously also have opinions about what they are covering. It’s complete fantasy to imagine that people with no interest in politics would provide coverage of it at all, never mind good quality coverage.

What’s important is not that people have those affiliations and sympathies, but whether they are capable of doing their job without being prejudiced by their views. Just as BBS would have a lot to lose from openly favouring one party or constitutional position, so too would pollsters. In my case, doing so would cost me interest (and thus people to bicker with about electoral reform on Twitter) and donations. In the pollsters’ case, the risk is that their entire business collapses.

Political polling is undoubtedly what catches the most public attention, but it’s actually only a small part of each polling agency’s business. Most of the business is market research, which is privately commissioned and not published. YouGov, for example, state that political polling makes up less than 10% of their business. As someone who is signed up to YouGov and has been asked about TV, financial services, cars, and brands many more times than about politics, I find that perfectly believable. That combination of high public salience and low business worth discourages dishonesty in political polling.

If a pollster were to deliberately manipulate their polling, their divergence from other pollsters would likely become apparent very quickly, and then be definitively proven wrong by the time of the actual election or referendum. Being very publicly wrong would cause substantial reputational damage, and drive the clients who account for most of the company’s business to competitors with a better track record. Far from anyone being deliberately wrong, even accidental polling failures such as the 2015 UK General Election have induced a massive amount of concern within the sector, and very public attempts to find the problem, fix it, and restore confidence.

With that in mind, rather than being a tool for public manipulation as detractors claim, political polling is more like extremely cost-efficient advertising for the quality of your service. If you’re getting the political stuff right, people notice, and that will attract customers to your company. It sounds cynical, but there’s more money to be had from being truthful than there is from lying!

5. Commissioning Isn't Conducting

Sometimes it isn’t the polling agency people take issue with, it’s the client that has commissioned the poll. Whether it’s a newspaper or an organisation with known partisan leanings, their name associated with a poll can see opponents dismiss the whole poll as naturally rigged. Again, this is something with very little basis in reality. Anyone can commission a poll, but as discussed in section 4 above, there’s very little business incentive for the pollster to seek to manipulate it, regardless of client.

Apart from anything else, most of the polls covered by Ballot Box Scotland are “standard” voting intentions, things like Holyrood, Westminster and Independence. A given pollster will typically have a long-established way of putting those questions, which are similar if not identical between companies, rather than the client setting the wording.

Even where the client has set the wording for a question, pollsters can be expected to push back against anything that would be leading or biased. They aren’t always successful – I’ve come across a few questions over the years that I’m pretty surprised got through – but broadly speaking they do a good job of this. The risk is much greater on non-standard questions than on voting intentions, so even if you find some bits of a poll a bit off, the actual election-related portion is almost certainly fine.