Election season is always inundated with political polling. Dominant candidates fancy polls as the talisman of victory; underdog candidates despise polls as trivial and inconsequential. Rarely will two polls match identically at any given point in time. Some polls consistently hook left, others predictably slice right. Even if you decide to stick with a preferred polling source, their numbers can rise and fall like a roller coaster from one week to the next, raising the question:
Is there any legitimacy to polling at all?
Of course there is. In fact, it’s cold, hard science. Let’s break down some of the key terms and main concepts used in political polling so that you can learn to better interpret polls. If you aren’t keen on the nitty-gritty, don’t let yourself get bogged down here; just skim through and remember the bold statements.
First &amp; foremost, a poll is just a sample. A sample is a small part of a larger group, called a population. The sample represents the larger group. For example, imagine a chef making spaghetti sauce. Does the chef have to eat the entire pot to know what the sauce will taste like? No. He can take a small taste from a well-stirred pot and assume that all the sauce will taste the same. So it is with political polling. Populations take many forms: registered voters, likely voters, even the entire population of the United States.
What do polls measure? A survey is a tool used to measure the respondents’ positions on an issue. Surveys are filled with the tools of a pollster’s trade, from multiple-choice and true/false questions to more complicated scale questions.
Pollsters use sample size calculators to determine how many individuals need to respond to the survey. Political polls will generally have between 1,000 and 3,000 people who respond per survey. Generally, the larger the sample, the smaller the margin of error; larger populations may also call for somewhat larger samples. Because interviews are costly and time-consuming, polls use the minimum number of interviews possible. Polls often refer to the number of respondents as the “N-size,” so in the fine print if the details say, “N=2,024 likely voters,” then we can conclude 2,024 likely voters composed the sample group, and that the population is all likely voters defined within a certain geographic region.
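The arithmetic behind such a calculator can be sketched in a few lines. This is a minimal sketch assuming the standard sample-size formula for a proportion, using the z-score for a 95% confidence level and the worst-case assumption that opinion is split 50/50:

```python
import math

def sample_size(margin_of_error, z=1.96, p=0.5):
    """Minimum respondents needed so the margin of error does not
    exceed the target; p = 0.5 is the worst (most demanding) case."""
    return math.ceil(z**2 * p * (1 - p) / margin_of_error**2)

# A +/-3% margin at 95% confidence needs about 1,068 respondents,
# squarely inside the 1,000-3,000 range typical of political polls.
print(sample_size(0.03))  # -> 1068
```

Notice that a huge population barely changes this number, which is why a poll of a few thousand people can represent millions of voters.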
What is margin of error? Margin of error is the mathematical difference between the mean response of the sample and the true population mean. This is often expressed in over/under terms that look like this ± , and is pronounced “plus or minus.” So if, for example, 80%±3% of sample respondents stated that “Yes, I would vote for state proposition 744,” then we can conclude that if we interviewed every single person in the population, we would expect to find that between 77% and 83% would also vote yes. The larger the margin of error, the greater the possible variance between the population mean and the sample mean.
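That interval can be computed directly. A minimal sketch, assuming the usual normal-approximation formula and the worst-case split of p = 0.5 (the conservative figure pollsters typically report):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Half-width of the 95% confidence interval for a sample proportion.
    p = 0.5 gives the largest (most conservative) margin."""
    return z * math.sqrt(p * (1 - p) / n)

n = 1068                  # respondents
moe = margin_of_error(n)  # roughly 0.03, i.e. +/-3 points
low, high = 0.80 - moe, 0.80 + moe
print(f"80% +/- {moe:.1%}: between {low:.1%} and {high:.1%}")
# -> 80% +/- 3.0%: between 77.0% and 83.0%
```

The square root in the formula is why doubling the sample does not halve the margin of error; precision gets expensive fast, which is another reason polls stop around a few thousand interviews.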
The last major component of polling is the confidence level. Confidence level is an indication of the reliability of the study. If a study is 80%±3% with a 95% confidence level, then we know that if the survey were conducted 100 times in a row, about 95 of those 100 surveys would produce results within the stated margin of error. Therefore the higher the confidence level, the more reliable the findings.
Surveys are universally composed of these components. Cold, hard science. But even with the science of sampling, and all the vast amounts of campaign funding available, polls are inconsistent. Why?
Pollsters don’t always do their homework when designing questionnaires. A major source of the discrepancies between polling agencies comes from questionnaire design. Proper questionnaire design eliminates bias and uses the right question to accurately measure a population’s true position on a candidate or issue. If polling agencies use garbage questions, then the results will be garbage. When creating a questionnaire, pollsters should keep questions short &amp; sweet: to the point, with direct and clear language. Many research companies disclose their actual questionnaires. If you are reading about a survey online, always click on the source of the political poll and read more.
Polling error also occurs when samples are too small or poorly chosen (not representative), and results are frequently analyzed improperly by politicians or the media. Even with all these limitations, the foundation under political polling is strong. Learn to acknowledge the limitations of each survey and polling agency.
Review polls critically, never accepting any single poll at face value, and you can use them as a forecast, often predicting election-day results.
Good businesses use the same sampling principles to collect information about customer satisfaction, employee satisfaction, market awareness, and a host of other important issues. Surveys are a powerful tool that, when used properly, can help uncover information critical for decision-making – a superior alternative to following intuition.