
Political polling: are political polls good science – or science fiction?

Emily Costello

The Presidential election is still months away. But with all the campaigning going on, you’ve probably heard the phrase “The polls show… ” enough times to make you scream.

Political polls are certainly popular. But are they scientific? To find out, we talked to some pollsters about their methods. Here’s what we learned:

With a new poll coming out practically every day, one thing is clear: Pollsters can’t call each and every person in the country every time they want to know if the American public likes Bill Clinton or Bob Dole better. Not only would that be expensive and time-consuming, but no one’s phone would ever stop ringing. So pollsters talk to a sample – a small portion – of the population instead.

Is that cheating? No, it’s mathematics, says G. Donald Ferree Jr., associate director of the Roper Center for Public Opinion Research. Pollsters use statistical formulas to show that sampling works, but you can convince yourself with the following thought experiment:

Imagine you have 1,000 jelly beans – 500 red and 500 blue – well-mixed in a jar. If you put on a blindfold and picked out 100 jelly beans, chances are you’d get pretty close to 50 red, 50 blue. Even if you ended up with 47 red and 53 blue, that would still be a good representation of what’s in the jar.


Of course, you might have a problem if your sample is too small. If you sampled only four or six jelly beans, you could easily pull three reds and one blue, or even beans of just one color, and miss the 50-50 split entirely.
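If you have a computer handy, you can try the jelly-bean experiment yourself. Here is a minimal sketch in Python (the jar setup and function name are just for illustration):

```python
import random

# A jar with 500 red and 500 blue jelly beans, well mixed.
jar = ["red"] * 500 + ["blue"] * 500
random.shuffle(jar)

def draw(jar, n):
    """Pull n beans 'blindfolded' (without replacement) and count the reds."""
    sample = random.sample(jar, n)
    return sample.count("red")

# A sample of 100 usually lands close to the 50-50 split...
reds_in_100 = draw(jar, 100)
print(f"Reds in a sample of 100: {reds_in_100}")

# ...but a tiny sample of 4 often misses it badly.
reds_in_4 = draw(jar, 4)
print(f"Reds in a sample of 4: {reds_in_4}")
```

Run it a few times: the sample of 100 almost always comes out somewhere near 50 reds, while the sample of 4 swings wildly.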

Pollsters say they can accurately predict what the entire country is thinking about political candidates by talking to as few as a thousand people. Naturally, though, there’s a catch: Those one thousand people must be chosen at random (without a pattern).
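The reason a thousand people can stand in for a whole country is that a random sample’s accuracy depends on the size of the sample, not the size of the population. A short sketch using the standard 95% margin-of-error formula for a simple random sample (the sample sizes here are chosen just for illustration):

```python
import math

def margin_of_error(n, p=0.5):
    """Approximate 95% margin of error for a random sample of size n.
    The worst case is p = 0.5, a 50-50 split of opinion."""
    return 1.96 * math.sqrt(p * (1 - p) / n)

for n in (100, 1000, 10000):
    # n = 1000 gives roughly plus-or-minus 3 percentage points.
    print(f"n = {n:>5}: about \u00b1{margin_of_error(n):.1%}")
```

Notice that going from 1,000 to 10,000 respondents only shrinks the margin from about ±3% to about ±1% – which is why most national polls stop around a thousand people.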

Why? Well, imagine what might happen if a pollster only asked people in President Clinton’s hometown, Little Rock, Arkansas, what they thought of the President. Little Rock residents might be more likely than others to have positive things to say about their “hometown President,” so the accuracy of the poll would be blown.

To make sure their samples are random, political pollsters use a computer program to dial numbers. If they’re conducting a national poll, every phone number in the country has an equal chance of coming up each time.
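Random dialing can be sketched as follows (the plain 10-digit format is a simplification for illustration; real random-digit-dialing systems restrict themselves to valid area codes and exchanges):

```python
import random

def random_phone_number():
    """Generate a 10-digit number uniformly at random, so every
    possible number has the same chance of being dialed."""
    return "".join(random.choice("0123456789") for _ in range(10))

# Each call is independent and uniform, so no region,
# neighborhood, or exchange is favored over another.
print(random_phone_number())
```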

But even with a computer’s help, pollsters sometimes let sampling bias creep into their polls. Imagine a pollster does a survey during baseball’s World Series, Ferree suggests. The sample could be biased against baseball lovers, who might not bother to answer their phones during a big game. If baseball lovers are more likely to favor one candidate, the poll results would be skewed.

Even if political pollsters get a good sample, they might unintentionally influence people by wording their questions a certain way or by putting them in a certain order. To see how, imagine that you’re being polled about your favorite sport. Say the first question is, “Which sport do you enjoy most: walking, in-line skating, or sailing?” In-line skating is your clear choice. But what if that question came after a series of questions about in-line skating injuries? Might your answer change?

The bottom line is that polls, like science, must be done carefully to produce meaningful results.

COPYRIGHT 1996 Scholastic, Inc.

COPYRIGHT 2004 Gale Group