Internet Surveys: Bad Data, Bad Science and Big Bias

Back in 2012, I wrote a blog piece about internet polls and surveys, asking whether they could – or should – be considered valid or scientific. After researching the question, I concluded that, since the vast majority lack any scientific basis and are created by amateurs – often with a goal to direct rather than measure public opinion:

Most internet polls are merely for entertainment purposes. They harmlessly allow us to believe we are being engaged and are participating in the process, and they make pollsters happy to receive the attention. They are, however, not appropriate tools for making political or social decisions unless they are backed by rigid, scientific and statistical constraints.*

Earlier that year, an editorial in a local newspaper wisely drew similar conclusions (emphasis added):

It’s said that there’s lies, damn lies, and statistics. You could also throw Internet polls into that mix. …

But anyone who takes the results to heart, or attributes any level of credibility to them, is horribly mistaken. We post those polls to gauge how the community feels about one issue or another, but otherwise there is little to no scientific basis to them.

And unlike a poll that’s usually conducted by the likes of Ipsos, Leger and Gallup – who use scientific principles to conduct many of their public-opinion polls – the results of an Internet poll can easily be ‘plumped’ by one side or the other enlisting family, friends and associates to vote, regardless of their level of understanding of an issue.

I wanted to follow up my earlier piece with more information from professionals in the polling and statistical analysis fields, as well as some journalistic commentary. The reasons internet polls are unscientific and lack credibility have been addressed by many universities, professional polling companies and associations.

The National Council on Public Polls weighed in on this question, stating (emphasis added):

While different members of NCPP have different opinions about the potential validity and value of online surveys, there is a consensus that many web-based surveys are completely unreliable. Indeed, to describe them as “polls” is to misuse that term.

The NCPP then suggests a list of ten questions for journalists to ask to help clarify the results and the scientific methods used to create and assess the poll:

  1. Is the internet-based survey designed to be representative, and if so, of what population? If not, it is not worthy of being reported.
  2.  What evidence is there that the sample is representative of the population it claims to represent? Unless the internet-based survey can provide clear evidence that the sample is representative by demographic and/or other relevant information it is not worthy of being reported.
  3. How was the sample drawn? Many internet-based surveys are just “call-in” polls or are asked only of people who happen to visit a particular web site. These surveys usually do not represent or make any pretense to represent any other population, and are not worthy of being reported.
  4. What steps does the organization take to prevent people from voting more than once? Any poll which allows people to vote twice, or more often, is not worthy of being reported.
  5. How were the data weighted? Survey data may contain biases from a variety of causes. The magnitude of these biases and random errors are usually unknown to the researcher. Even so, weighting may minimize these biases and errors when there is a strong relationship between the weighting variable and data in the survey. If there is not a strong relationship weighting may make the survey results worse. Demographic weighting of internet-based surveys is essential but is not sufficient. Some firms, in addition to demographic weighting, are weighting on other variables in an attempt to reduce the biases of online data.
  6. What is the evidence that the methodology works and produces accurate data? Unless the organization can provide the results of their other internet-based surveys which are consistent with other data, whether from the Census or other surveys, the survey results are not worthy of being reported.
  7. What is the organization’s experience and track record using internet-based polls? Unless the organization can demonstrate a track record of obtaining reliable data with other online surveys, their online surveys should be treated with great caution.
  8. What is the organization’s experience and track record as a survey researcher using traditional survey methods? If the organization does not have a track record in designing and conducting surveys using the telephone or in-person surveys, it is unlikely that they have the expertise to design and conduct online surveys.
  9. Does the organization follow the codes of conduct of AAPOR, CASRO, and NCPP (whether or not they are members)? If they follow none of these, they are probably not a qualified survey research organization. The more of these Codes they follow, the more likely their data are to be reliable and be trusted.
  10. Is the organization willing to disclose these questions and the methods used (as required by the codes of conduct referred to in #9 above)? If the organization is unwilling to disclose, or unable to provide, the relevant information the survey is probably not worthy of being reported.

The NCPP also has a list of 20 questions journalists should ask about all poll results. They state in the introduction (emphasis added):

The only polls that should be reported are “scientific” polls. A number of the questions here will help you decide whether or not a poll is a “scientific” one worthy of coverage – or an unscientific survey without value. Unscientific pseudo-polls are widespread and sometimes entertaining, but they never provide the kind of information that belongs in a serious report. Examples include 900-number call-in polls, man-on-the-street surveys, many Internet polls, shopping mall polls, and even the classic toilet tissue poll featuring pictures of the candidates on each roll.

One major distinguishing difference between scientific and unscientific polls is who picks the respondents for the survey. In a scientific poll, the pollster identifies and seeks out the people to be interviewed. In an unscientific poll, the respondents usually “volunteer” their opinions, selecting themselves for the poll.

The results of the well-conducted scientific poll provide a reliable guide to the opinions of many people in addition to those interviewed – even the opinions of all Americans. The results of an unscientific poll tell you nothing beyond simply what those respondents say.

Similarly, the AP Stylebook (a resource used by tens of thousands of journalists) says reporters should ask these questions when presented with results from any poll or survey:

Information that should be in every story based on a poll includes the answers to these questions:

  1. Who did the poll and who paid for it?
    (The place to start is the polling firm, media outlet or other organization that conducted the poll. Be wary of polls paid for by candidates or interest groups. The release of poll results is often a campaign tactic or publicity ploy. Any reporting of such polls must highlight the poll’s sponsor, so that readers can be aware of the potential for bias from such sponsorship.)
  2. How many people were interviewed? How were they selected?
    (Only a poll based on a scientific, random sample of a population – in which every member of the population has a known probability of inclusion – can be used as a reliable and accurate measure of that population’s opinions. Polls based on submissions to Web sites or calls to 900-numbers may be good entertainment but have no validity. They should be avoided because the opinions come from people who select themselves to participate. If such unscientific pseudo-polls are reported for entertainment value, they must never be portrayed as accurately reflecting public opinion and their failings must be highlighted.)
  3. Who was interviewed?
    (A valid poll reflects only the opinions of the population that was sampled. A poll of business executives can only represent the views of business executives, not of all adults. Surveys conducted via the Internet – even if attempted in a random manner, not based on self-selection – face special sampling difficulties that limit how the results may be generalized, even to the population of Internet users. Many political polls are based on interviews only with registered voters, since registration is usually required for voting. Close to the election, polls may be based only on “likely voters.” If “likely voters” are used as the base, ask the pollster how that group was identified.)
  4. How was the poll conducted – by telephone or some other way?
    (Avoid polls in which computers conduct telephone interviews using a recorded voice. Among the problems of these surveys are that they do not randomly select respondents within a household, as reliable polls do, and they cannot exclude children from polls in which adults or registered voters are the population of interest.)
  5. When was the poll taken?
    (Opinion can change quickly, especially in response to events.)
  6. What are the sampling error margins for the poll and for subgroups mentioned in the story?
    (The polling organization should provide sampling error margins, which are expressed as “plus or minus X percentage points,” not “percent.” The margin varies inversely with sample size: the fewer people interviewed, the larger the sampling error. Although some pollsters state sampling error or even poll results to a tenth of a percentage point, that implies a greater degree of precision than is possible from a sampling; sampling error margins should be rounded to the nearest half point and poll results to the nearest full point. If the opinions of a subgroup – women, for example – are important to the story, the sampling error for that subgroup should be included. Subgroup error margins are always larger than the margin for the entire poll.)
  7. What questions were asked and in what order?
    (Small differences in question wording can cause big differences in results. The exact question texts need not be in every poll story unless it is crucial or controversial.)
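
To make question 6 concrete, here is a minimal sketch in Python of the conventional 95%-confidence margin-of-error calculation for a simple random sample (the sample sizes below are merely illustrative). It shows both of the AP’s points: the margin shrinks as the sample grows, and subgroup margins are always larger than the full-sample margin. The formula applies only to probability samples; it says nothing about the bias in a self-selected poll.

import math

def margin_of_error(n, p=0.5, z=1.96):
    # Conventional 95% margin of error for a simple random sample:
    # n = sample size, p = assumed proportion (0.5 is the worst case),
    # z = z-score for the confidence level (1.96 for 95%).
    return z * math.sqrt(p * (1 - p) / n)

# Full sample of 1,000 respondents: roughly +/- 3.1 percentage points.
print(round(100 * margin_of_error(1000), 1))

# A subgroup of 250 respondents: roughly +/- 6.2 percentage points,
# illustrating why subgroup error margins are always larger.
print(round(100 * margin_of_error(250), 1))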

Michael Link and Robert Oldendick, in their article “‘Good’ Polls / ‘Bad’ Polls — How Can You Tell? Ten Tips for Consumers of Survey Research”, note that,

Surveys are increasingly used for ulterior purposes, such as soliciting money, creating news filler, marketing, and even shaping opinions on certain issues (see sidebar #2). Consumers of survey research need an understanding, therefore, of why the study was conducted. Sometimes this will require the consumer to look past the stated objective of the survey and examine the context and manner in which the results are presented or published…

It doesn’t take an advanced degree in statistics to become an astute consumer of survey research information. It does, however, take a basic understanding of the process involved and an awareness of the potential problems posed by this method of gauging public attitudes. The best advice for survey consumers, therefore, is “buyer beware.”

Sidebar 2, referenced in the quote above, is about pseudo-polls. It says (emphasis added):

The Growth of “Pseudo-Polls”

The high demand for public opinion data has led to the growth of what some in the survey industry have labeled “pseudo-polls.” They include efforts such as 1-900 call-in polls, clip-out or write-in polls, political “push polls,” and internet polls to name a few. The major problems with such efforts tend to be two-fold: First, due to the way respondents are selected for these “polls” the samples are rarely representative of the larger populations they purport to represent. For example, many nightly news programs will pose questions and then ask viewers to call in and register their opinion. Those who do so are usually viewers most interested in the topic (and they only include viewers watching that particular program at that particular time).

Unfortunately, these results are often portrayed as representing the views of the “general public.” Once this happens, the results become “facts” in the public domain. A second problem with “pseudo-polls” is often their purpose.

While reputable surveys are conducted to provide objective information about public attitudes and opinions, “pseudo-polls” oftentimes have a hidden motive or agenda, such as fund-raising or manipulating public attitudes. Increasingly political campaigns have turned to the use of so-called “push polls,” a “telemarketing technique in which telephone calls are used to canvass potential voters, feeding them false or misleading ‘information’ about a candidate under the pretense of taking a poll to see how this ‘information’ affects voter preferences” (AAPOR, 1997). In reality, “push polls” are not “polls” at all, but simply sophisticated forms of telemarketing designed to manipulate public opinion, rather than measure it.

The use of “pseudo-polls” and the representation of data from these “polls” as genuine reflections of public sentiment have been strongly condemned by professional survey research associations.

The NCPP also adds these cautions in their explanations for journalists (emphasis added):

  • …reporting on a survey by a special-interest group is tricky. For example, an environmental group trumpets a poll saying the American people support strong measures to protect the environment. That may be true, but the poll was conducted for a group with definite views. That may have swayed the question wording, the timing of the poll, the group interviewed and the order of the questions. You should carefully examine the poll to be certain that it accurately reflects public opinion and does not simply push a single viewpoint.
  • The key reason that some polls reflect public opinion accurately and other polls are unscientific junk is how people were chosen to be interviewed. In scientific polls, the pollster uses a specific statistical method for picking respondents. In unscientific polls, the person picks himself to participate.
  • No survey ever reaches everyone who should have been interviewed. You ought to know what steps were undertaken to minimize non-response, such as the number of attempts to reach the appropriate respondent and over how many days.
  • …many Internet polls are simply the latest variation on the pseudo-polls that have existed for many years. Whether the effort is a click-on Web survey, a dial-in poll or a mail-in survey, the results should be ignored and not reported. All these pseudo-polls suffer from the same problem: the respondents are self-selected. The individuals choose themselves to take part in the poll – there is no pollster choosing the respondents to be interviewed.
  • Perhaps the best test of any poll question is your reaction to it. On the face of it, does the question seem fair and unbiased? Does it present a balanced set of choices? Would most people be able to answer the question?
  • Sometimes the very order of the questions can have an impact on the results. Often that impact is intentional; sometimes it is not. The impact of order can often be subtle.
  • In recent years, some political campaigns and special-interest groups have used a technique called “push polls” to spread rumors and even outright lies about opponents. These efforts are not polls, but political manipulation trying to hide behind the smokescreen of a public opinion survey.
  • Results of other polls – by a newspaper or television station, a public survey firm or even a candidate’s opponent – should be used to check and contrast poll results you have in hand.

The American Association for Public Opinion Research adds:

A “Push Poll” is not a Legitimate Poll

A so-called “push poll” is an insidious form of negative campaigning, disguised as a political poll. “Push polls” are not surveys at all, but rather unethical political telemarketing — telephone calls disguised as research that aim to persuade large numbers of voters and affect election outcomes, rather than measure opinions. This misuse of the survey method exploits the trust people have in research organizations and violates the AAPOR Code of Professional Ethics and Practices.

Similarly, the Marketing Research Association decries push polls:

The practice of “push polling” is abusive to voters, candidates, parties, and organizations. More broadly, each such effort abuses the research profession by giving recipients a misleading and negative view of what research is and how it works – making them much less likely to participate in future survey and opinion research studies. In an era of ever-shrinking response rates, the research profession cannot afford such impugning.

The AAPOR also has this to say about web-based surveys (emphasis added):

Only when a Web-based survey adheres to established principles of scientific data collection can it be characterized as representing the population from which the sample was drawn. But if it uses volunteer respondents, allows respondents to participate in the survey more than once, or excludes portions of the population from participation, it must be characterized as unscientific and is unrepresentative of any population.

Using a correct and representative sample is crucial to creating credible survey results. In his paper, Sampling Methods for Web and E-mail Surveys, Ronald Fricker has much to say about sampling:

To conduct statistical inference (i.e., to be able to make quantitative statements about the unobserved population statistic), the sample must be drawn in such a fashion that one can both calculate appropriate sample statistics and estimate their standard errors. To do this, as will be discussed in this chapter, one must use a probability-based sampling methodology…

Sampling error arises when a sample of the target population is surveyed. It results from the fact that different samples will generate different survey data. Roughly speaking, assuming a random sample, sampling error is reduced by increasing the sample size…

Non-probability samples, sometimes called convenience samples, occur when either the probability that every unit or respondent included in the sample cannot be determined, or it is left up to each individual to choose to participate in the survey…

Non-probability-based samples often require much less time and effort, and thus usually are less costly to generate, but generally they do not support statistical inference…

If a sample is systematically not representative of the population of inference in some way, then the resulting analysis is biased…

Unrestricted, self-selected surveys are a form of convenience sampling and, as such, the results cannot be generalized to a larger population.
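
Fricker’s point about convenience samples is easy to demonstrate with a toy simulation. The sketch below is hypothetical and not from his paper: it assumes supporters of some proposal are three times as likely to volunteer for an open web poll as non-supporters (a made-up figure purely for illustration), and shows that a probability sample recovers the true level of support while the self-selected sample does not – no matter how large it is.

import random

random.seed(1)

# Hypothetical population of 100,000 people, 40% of whom support a proposal.
population = [1] * 40_000 + [0] * 60_000

def random_sample(pop, n):
    # Probability sample: every member has a known, equal chance of selection.
    return random.sample(pop, n)

def self_selected_sample(pop, n):
    # Convenience sample: supporters are assumed to opt in three times as often.
    volunteers = [x for x in pop if random.random() < (0.06 if x else 0.02)]
    return random.sample(volunteers, n)

for sampler in (random_sample, self_selected_sample):
    estimate = sum(sampler(population, 2000)) / 2000
    print(f"{sampler.__name__}: estimated support = {estimate:.1%}")

# Typical output: the random sample lands near the true 40%, while the
# self-selected sample lands near 67% – and enlarging the self-selected
# sample only reproduces the same bias with more precision.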

Fricker also provides an example where data from a self-selected poll is misrepresented because the results lack full disclosure as to the nature of the sample (emphasis added):

Misrepresenting convenience samples
A related and significant concern with nonprobability-based sampling methods, both for Internet-based and traditional surveys, is that survey accuracy is characterized only in terms of sampling error and without regard to the potential biases that may be present in the results. While this has always been a concern with all types of survey, the ease and spread of Internet-based surveys seems to have exacerbated the practice. For example, the results of an ‘E-Poll’ were explained as follows:

THE OTHER HALF / E-Poll® Survey of 1,007 respondents was conducted January 16–20, 2003. A representative group of adults 18+ were randomly selected from the E-Poll online panel. At a 95% confidence level, a sample error of +/- 3% is assumed for statistics based on the total sample of 1,007 respondents. Statistics based on sub-samples of the respondents are more sensitive to sampling error. (From a press release posted on the E-Poll website.)

No mention was made in the press release that the ‘E-Poll online panel’ consists of individuals who had chosen to participate in online polls, nor that they were unlikely to be representative of the general population. Rather, it leaves readers with an incorrect impression that the results apply to the general population when, in fact, the margin of error for this particular survey is valid only for adult members of that particular E-Poll online panel.

Any suggestion that this survey was representative of the larger population is misleading. Yet a lot of internet polls draw similar conclusions and report similar results without disclosing the basic flaw in their sampling technique.
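
As an aside, the quoted “+/- 3%” is just the textbook formula applied blindly: with n = 1,007 and the worst-case proportion of 50%, the 95% margin of error is 1.96 × √(0.5 × 0.5 / 1,007), roughly 3.1 percentage points. The arithmetic is fine; the problem is that the formula assumes a probability sample drawn from the population of interest, which an opt-in panel is not, so the figure describes only the panel itself.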

The AAPOR produced an 81-page report on online surveys and polls, noting (emphasis added),

A special AAPOR task force has concluded that there is no theoretical basis to support population inferences or claims of representativeness when using online survey panels recruited by nonprobability methods.

The association also notes that (emphasis added):

Surveys based on self-selected volunteers do not have that sort of known relationship to the target population and are subject to unknown, non-measurable biases. Even if opt-in surveys are based on probability samples drawn from very large pools of volunteers, their results still suffer from unknown biases stemming from the fact that the pool has no knowable relationships with the full target population.

AAPOR considers it harmful to include statements about the theoretical calculation of sampling error in descriptions of such studies, especially when those statements mislead the reader into thinking that the survey is based on a probability sample of the full target population. The harm comes from the inferences that the margin of sampling error estimates can be interpreted like those of probability sample surveys.

For opt-in surveys and polls, therefore, responsible researchers and authors of research reports are obligated to disclose that respondents were not randomly selected from among the total population, but rather from among those who took the initiative or agreed to volunteer to be a respondent.

The NCPP also has a page about the principles of disclosure. These are commitments by members of the NCPP to provide the data, the methodology, population size, sampling method and more. This information is necessary for any recipient to fully understand the results.

Responsive Management, an American organization “specializing in survey research on natural resource and outdoor recreation issues,” put together a good journal article on internet surveys. On that web page, they define SLOP surveys – “self-selected opinion polls” – and their inherent problems:

Online surveys are largely conducted through non-probability sampling: access to the survey is not controlled, and anyone can participate. The Internet usually features three kinds of non-probability surveys. The first consists of online polls or surveys in which anyone can participate. These are sometimes referred to as self-selected opinion polls, or SLOP surveys, meaning that people who decide to take the survey make up the sample.

The second type is closed population surveys, where a common factor exists among the respondents, but respondents are still self-selected within that population, and access to the survey is not necessarily controlled.

The main article goes on to state (emphasis added):

Recent research conducted by Responsive Management and published in the peer-reviewed journal Human Dimensions of Wildlife shows that online surveys can produce inaccurate, unreliable, and biased data. There are four main reasons for this: sample validity, non-response bias, stakeholder bias, and unverified respondents.

Without going into the details of the article that explains in technical and scientific detail why these reasons matter, let me simply repeat the article’s conclusion (emphasis added):

As a result of these problems, obtaining representative, unbiased, scientifically valid results from online surveys is not possible at this time… This is because, from the outset, there is no such thing as a complete and valid sample — some people are systematically excluded, which is the very definition of bias. In addition, there is no control over who completes the survey or how many times they complete the survey. These biases increase in a stepwise manner, starting out with the basic issue of excluding those without Internet access, then non-response bias, then stakeholder bias, then unverified respondents. As each of these becomes an issue, the data become farther and farther removed from being representative of the population as a whole.

A good definition of bias comes from PRC, a group that does surveys and polls for the health care industry. They note (emphasis added):

Bias means that the results of your survey are skewed towards a certain type of people who aren’t wholly representative of your average (person). Usually, bias occurs when the survey methodology is only compelling to a very specific segment of the surveyed population.

A draft paper on polling methodology notes this about the selection bias that is common with internet polls and surveys:

Selection bias is an error in how the individual or units are chosen to participate in the survey. It can occur in convenience samples, for example, when those with strong opinions choose to participate in a survey out of proportion to their numbers in the general population. It can also occur if survey participation depends on the respondents having access to particular equipment. For example, Internet-based surveys will miss those without Internet access.

The Council of American Survey Research Organizations suggests online pollsters use methods to restrict invalid entries (see the sketch after this list), including:

  • Limiting the number of surveys a respondent can take
  • Asking participants to answer screening questions to see whether they meet the demographic requirements of the research sample
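
Those two safeguards are easy to sketch in code. Below is a minimal, hypothetical illustration in Python (the identifiers, screening rule and storage step are invented for this example, not anything CASRO prescribes): it rejects repeat submissions from the same registered respondent and applies a simple demographic screen before a response is accepted.

# Hypothetical sketch of the two safeguards above: one response per
# registered respondent, plus a demographic screening question.
# The field names and quota rule are invented for illustration.
seen_respondents = set()          # IDs that have already submitted
TARGET_AGE_RANGE = range(18, 65)  # example screening requirement

def accept_response(respondent_id, age, answers):
    """Return True only if the response passes basic screening."""
    if respondent_id in seen_respondents:
        return False              # duplicate: limit one survey per person
    if age not in TARGET_AGE_RANGE:
        return False              # fails the demographic screen
    seen_respondents.add(respondent_id)
    store(respondent_id, answers) # persist however the survey platform does
    return True

def store(respondent_id, answers):
    print(f"recorded {respondent_id}: {answers}")

Note that screening of this kind only limits duplicates and obviously ineligible respondents; it does nothing to correct the self-selection bias discussed throughout this piece.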

None of this is new, of course. As early as 2000, Slate magazine and The Industry Standard website collaborated on an article titled, “Why Online Polls Are Bunk.” The first problem it identified was:

Respondents are not randomly selected. Online polls are a direct descendent of newspaper and magazine straw polls, which were popular in the 19th and early 20th centuries. The print-media straw polls (very different from today’s political straw polls but equally inaccurate) featured clip-out coupons that readers sent in to cast ballots for their preferred candidate. Other organizers of straw polls mailed ballots to people on a list of names.

But the article’s main conclusion pointed to the same problem then that afflicts such polls now:

An online poll–even one that eliminates the problem of multiple voting–doesn’t tell you anything about Internet users as a whole, just about those users who participated in the poll.

Ipsos Reid adds that online polls present other problems for things like calculating the margin of error in results, noting:

  • Not everyone has internet access;
  • Online panels are created through an opt-in process, rather than through a random fashion like a random-digit dial telephone call;
  • Less is known about the profiles of individuals who complete online surveys versus those who do not, or about the likelihood of an online person to complete an online survey or to participate in a survey panel.

Russell Renka, Professor of Political Science at Southeast Missouri State University, wrote a piece called “The Good, the Bad, and the Ugly of Public Opinion Polls,” in which he said (emphasis added):

Rule One in using website polls is to access the original source material. The web is full of polls, and reports about polls. They are not the same thing. A polling or survey site must contain the actual content of the poll, specifically the questions that were asked of participants, the dates during which the poll was done, the number of participants, and the sampling error (see next section below). Legitimate pollsters give you all that and more.

Second, the subjects in the sample must be randomly selected… The web is filled with sites inviting you to participate by posting your opinion. This amounts to creation of samples via self-selection. That trashes the principle of random selection, where everyone in a target population has the same likelihood of being in the sample. A proper medical experiment never permits someone to choose whether to receive a medication rather than the placebo. No; subjects are randomly placed in either the “experimental group” (gets the treatment) or the “control group” (gets the sugar-coated placebo).

If you can call or e-mail yourself into a sample, why would you believe the sample was randomly selected from the population? It won’t be. It consists of persons interested enough or perhaps abusive enough to want their voices heard. Participation feels good, but it is not random selection from the parent population. Next, remember this: any self-selected sample is basically worthless as a source of information about the population beyond itself.**

Ugly Polls: This is a special category of bad poll, reserved for so-called pollsters who deliberately use loaded or unfairly worded questions under disguise of doing an objective survey. Some of these are done by amateurs, but the most notorious are produced by political professionals. These include the infamous push polls….

There are also comparable polls composed of subtle question biases that create a preconceived set of responses. These fall into the category of hired gun polls.

A 2006 paper published at Purdue University, titled “A Critique of Internet Polls as Symbolic Representation and Pseudo-Events,” made many salient points about the cynical use of nonscientific polls by the media as audience manipulators (emphasis added):

The need for more novel and continuous news has transformed the nature of the news business from merely reporting the goings-on in the world to that of creating newsworthy events. News is no longer something that happens, news is what the media make happen. Boorstin calls the phenomenon of news creation the “pseudo-event”.***

Nonscientific opinion polls are entertainment tools… More than how Internet opinion polls make individuals feel, however, is the fact that polls also serve a framing function whereby ‘‘news’’ stories are given legitimacy by their relationship to polls—even when the polls are not scientific. In the long run, negative consequences from the evolution of the opinion poll into just another pseudo-event are likely. The merging of news and entertainment, in combination with the media’s emphasis of image over content may contribute to what Jack Fuller, President of Tribune Publishing Company, calls a crisis of inauthenticity.

Nonscientific Internet polls illustrate the fundamental transformation that has taken place in how news is constructed. What the 24-hour broadcast news Web sites do is to blur the lines between what is news and what is entertainment. The role played by the opinion poll in contemporary news coverage is similar (but with an appearance of legitimacy) to that played by streakers in the 1970s, psychics in the 1980s, and stories about UFOs in the 1990s—they were entertaining. Scholars, journalists, teachers, and, by extension, citizens should be critical of the role that these nonscientific opinion polls play in political dialogue.

Not only do Internet polls ‘‘create news,’’ they also create the illusion that uninformed public ‘‘opinion’’ has a legitimate role in policy making. … as long as ‘‘opinion polls’’ are structured so that they allow people to ‘‘express their opinions’’ rather than ‘‘measuring opinion’’ or public sentiment on social, political, policy, and other issues, the Internet poll will continue to be nothing more than a tool used by media conglomerates to increase advertising revenue and to persuade people to visit their Web sites.

Ipsos, a major polling company, wrote an open letter to Canadian journalists about bad or uncritical reporting of polling, saying (emphasis added),

…all polls are NOT created equally. And, in spite of what you may assume, pollsters are never held to account for their indiscretions, incompetence and mistakes (there is no “polling police”). Some marginal pollsters count on your ignorance and hunger to make the news to peddle an inferior product. Others are using your coverage to “prove” that their untried methodology is the way forward for market research in Canada. Instead of being their own biggest sceptics (which is what our training tells us to be), they’ve become hucksters selling methodological snake oil. Remember, the term “pollster” is derived from the term “huckster”.

Journalists are no mere dupes in this process. We’ve also seen a disturbing trend of late in which questionable polls find their way into an outlet’s coverage because they appear to match an editorial line, or present a counter-intuitive perspective. After all, if a poll is wrong it’s easy to throw the pollster under the bus and walk away with clean hands.

All of this MUST stop. We are distorting our democracy, confusing voters, and destroying what should be a source of truth in election campaigns – the unbiased, truly scientific public opinion poll.

The Internet Journalists’ network identifies four mistakes journalists make when reporting on polls, including:

Trusting “voodoo polls”

Open-access polls were dubbed “voodoo polls” because of their scientific inaccuracy. Typically offered online or by phone, anyone can vote in them and there’s no way to tell who they are or whether they’ve voted more than once. Journalists should report only on polls that use random sampling and quota sampling to ensure that people polled are representative of the population, he says.

Wikipedia defines voodoo polls as (emphasis added):

A voodoo poll (or pseudo-poll) is a pejorative description of an opinion poll with no statistical or scientific reliability, which is therefore not a good indicator of opinion on an issue. A voodoo poll will tend to involve self-selection, will be unrepresentative of the target population, and is often very easy to rig by those with a partisan interest in the results of the poll.

And the Journalists’ Resource site warns (emphasis added),

In all scientific polls, respondents are chosen at random. Surveys with self-selected respondents — for example, people interviewed on the street or who just happen to participate in a web-based survey — are intrinsically unscientific.

The form, wording and order of questions can significantly affect poll results. With some complex issues — the early debate over human embryonic stem cells, for example — pollsters have erroneously measured “nonopinions” or “nonattitudes,” as respondents had not thought through the issue and voiced an opinion only because a polling organization contacted them. Poll results in this case fluctuated wildly depending on the wording of the question.

Another paper on the Journalists’ Resource website notes that timing and order of presentation matter to the response, so how a survey is constructed and worded is of great importance:

“…individuals typically give greater weight to the more immediate cues contained in the most recent message. Democratic competition thus may reduce or eliminate framing effects only when people are presented with opposing frames at the same time.”

Overall, the study demonstrates that “most individuals were shown to be vulnerable to the vagaries of timing and the framing of communications.”

A paper for the International Statistical Institute makes the point about the amateur nature of Web surveys very clearly:

Unfortunately, the term “Web survey” is implicitly associated with non-professional, ad-hoc and self-selected forms on the Web, because they are the most numerous and most noticeable types of Web survey data collection. Today, the notion of “Web survey” thus already contains a flavour of low quality, what is not the case with “telephone surveys”, where we automatically assume a RDD or similar professional undertaking and not some call-in telephone survey.

I also recommend you read the information on Fallacy Watch about how to read a poll, which clearly describes the issues and concerns about valid vs unscientific polls (emphasis added):

So, the first question that you should ask of a poll report you read is: “Was the sample chosen scientifically?” If the poll is a scientific one, then an effort has been made to either choose the sample randomly from the population, or to weight it in order to make it representative of the population.

Reputable polling organizations always use scientific sampling. However, many polls are unscientific, such as most online polls you take using a computer, telephone surveys in which you must call a certain number, or mail-in questionnaires in magazines or sent to you by charities. Such surveys suffer from the fault that the sample is self-selected, that is, you decide whether you wish to participate. Self-selected samples are not likely to be representative of the population for various reasons:

  • The readers of a particular magazine or the contributors to a specific charity are likely to differ from the rest of the population in other respects.
  • Those who take the time and trouble to volunteer for a poll are more motivated than the average person, and probably care more about the survey subject.
  • Many such polls allow individuals to vote more than once, thus allowing the results to be skewed by people who stuff the ballot box.

Even the Parliament of Canada had comments to make about public polling, saying (emphasis added):

Polls can be deliberately misused and misinterpreted if the technical information accompanying them is too sketchy to permit assessment of the validity of the results. As in many other democratic societies, the treatment of polls by the media therefore remains an important issue in Canada, with many calling for increased efforts on the part of the polling industry to ensure that opinion surveys are reported with the necessary background information. The dilemma for the media, in turn, is how to provide good polling information without overloading their audiences with less interesting technical data. Some have argued that the media are not trained in the interpretation of polls.

The Parliamentary report also suggested (emphasis added):

  • …any news organization that sponsors, purchases or acquires any opinion poll, and is the first to publish or announce its results in Canada during an election campaign, be required to include technical information on the methodology used in the poll;
  • any such news organization be required to make available to any person within 24 hours of publication and for the cost of duplication, a full report on the results of the questions published, including technical information and the results on which the publication or announcement is based;
  • reports in the news media of polls done privately or by other news organizations, when presented for the first time in Canada in a manner similar to formal reports of media polls, be subject to the same disclosure rules concerning technical information on methodology;

That report ends,

Some commentators suspect that pollsters have become a dangerous new breed of political advisor, emphasizing numbers instead of issues and usurping the traditional informational role of political parties…. Good governing may have less to do with polling than with ability to lead.

How does an online poll or survey prevent a respondent from voting more than once? Or confirm that the respondent is valid (i.e., not a child, and not the pollster or one of the pollster’s own group)? Or – in the case of a municipal survey – is actually a resident of that municipality and not an outsider? Or is aware of the issues, concerns and events being raised in the survey? How does the pollster ensure that the survey represents a random sample of the community, and not merely supporters of the particular advocacy group posting the survey?

And then, if the results were reported, did the media clarify these issues with the pollsters, consult the methodology, confirm the margin of error, and determine that there was both a reasonable random sampling of the community and no opportunity for hoax, fraud or poll crashing to skew the results? Did the media raise any questions about the way the questions were worded and presented? Or were the results simply published as presented by the advocacy group? ****

Some valid survey sites, like YouGov, make respondents register and provide demographic data in order to participate in polls. Respondents must sign in with their email address to participate. That way the site controls who responds to specific polls, and has a demographic baseline against which to measure where each response sits in the demographic and data analyses.

YouGov seeks to produce results that are not just demographically representative but also attitudinally representative. As well as weighting our raw data to ensure that our figures conform to the profile of the nation by age, gender, social class and region, we also weight our data by newspaper readership, and often by past vote.
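
What YouGov describes is, in essence, post-stratification weighting: each respondent is weighted by the ratio of their group’s share of the population to its share of the sample, so that the weighted sample matches the known population profile. Here is a minimal, hypothetical sketch in Python (the categories and percentages are invented for illustration, not YouGov’s actual targets):

# Post-stratification sketch: each respondent gets the weight
# population_share / sample_share for their demographic group.
# The age groups and percentages below are invented for illustration.
population_share = {"18-34": 0.28, "35-54": 0.34, "55+": 0.38}

respondents = [
    {"age": "18-34", "supports": True},
    {"age": "18-34", "supports": True},
    {"age": "35-54", "supports": False},
    {"age": "55+",   "supports": False},
]

n = len(respondents)
sample_share = {g: sum(r["age"] == g for r in respondents) / n
                for g in population_share}

for r in respondents:
    r["weight"] = population_share[r["age"]] / sample_share[r["age"]]

weighted_support = (sum(r["weight"] for r in respondents if r["supports"])
                    / sum(r["weight"] for r in respondents))
print(f"unweighted support: {sum(r['supports'] for r in respondents) / n:.0%}")
print(f"weighted support:   {weighted_support:.0%}")

In this toy example the over-represented younger respondents are weighted down and the older ones up, moving the estimate from 50% to 28%. As the NCPP’s fifth question above notes, weighting of this kind can correct a skewed demographic mix, but it cannot remove biases unrelated to the weighting variables.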

Polls that depend on cookies or IP addresses for respondent verification – used by most quick and easy poll apps and sites – are open to abuse and hoaxing through simple and easily practiced methods. Anyone with a modicum of technical savvy can do this. A handful of pranksters can easily and significantly skew the outcome of any poll or survey with nothing more than a web browser and some free time.

Poll crashing is not simply a prank; it has become big business for some companies, and is de rigueur for many political parties and advocacy groups who want to create or manipulate the political platform for their message. The PBS story on poll crashers notes one person saying:

“First of all, and this is the most important point, it’s that these are not polls… Polling is a science, and polls work, and they work well. These are web widgets; it’s no more a poll than what someone put up on Flickr is the Mona Lisa. And you put them on your blog because they’re fun. Even CNN polls going back to the beginning of the Internet — the first online polls were these CNN polls. That’s how it all started really — even they put it up to entertain their readers, to entertain the masses…” Though he agreed that the polls are a form of community engagement, he rejected the notion that they could somehow accurately measure how much enthusiasm or passion exists online about a particular issue.

Any online survey is subject to all sorts of dodginess: restricted or incorrect sampling; the great potential for abuse, hoaxing, crashing and plumping; potentially misleading layout, wording and question order established by non-professionals – sometimes designed to elicit specific responses; and erroneous conclusions drawn without full disclosure.

There is great potential for future use of online survey techniques when these issues have been resolved, but until and unless they are, online surveys must be treated by readers and consumers with caution and skepticism.

I stand by my original comment that most online polls are unscientific, biased and at best for entertainment purposes only, up there with your daily horoscope for credibility. *****

~~~~~

* I include in this category of non-scientific, entertainment-only polls my own sidebar poll on this blog. Mine is meant to stir discussion, not to present any realistic results. In fact, the results don’t matter as long as the poll engages people in discussing the issue.

** Self-selection poses great problems for statisticians and data analysts in trying to determine how to weight those responses to make them even marginally valid. This paper describes in detail one method that involves online and offline polling to create a base for weighting. Without a method like this, a self-selected poll is simply a survey of the people who want to tell you what you wanted to hear. Basically, self-selected polls tell you only what the respondents in the selected group believe, and should not be considered to have any relevance to the larger population.

*** For more on the concept of pseudo-events in media and popular culture, see Daniel Boorstin’s book, The Image: A Guide to Pseudo-Events in America. A review of that concept is here. It notes, “Public life, he said, was filled with “pseudo-events” — staged and scripted events that were a kind of counterfeit version of actual happenings. Just as there were now counterfeit events, so, he said, there were also counterfeit people – celebrities – whose identities were being staged and scripted, to create illusions that often had no relationship to any underlying reality.”

Blogger Joe Carter also adds media polls and surveys to Boorstin’s pseudo-event list (emphasis added):

Opinion polls are a prime example of what sociologist Daniel Boorstin called “pseudo-events.” A pseudo-event, according to Boorstin, is a happening that possesses the following characteristics:

  • It is not spontaneous, but comes about because someone has planned, planted, or incited it. Typically, it is not a train wreck or an earthquake, but an interview.
  • It is planted primarily (not always exclusively) for the immediate purpose of being reported or reproduced. Therefore, its occurrence is arranged for the convenience of the reporting or reproducing media. Its success is measured by how widely it is reported. Time relations in it are commonly fictitious or factitious; the announcement is given out in advance “for future release” and written as if the event had occurred in the past. The question, “Is it real?” is less important than, “Is it newsworthy?”
  • Its relation to the underlying reality of the situation is ambiguous. Its interest arises largely from this very ambiguity. Concerning a pseudo-event the question, “What does it mean?” has a new dimension. While the news interest in a train wreck is in what happened and in the real consequences, the interest in an interview is always, in a sense, in whether it really happened and in what might have been the motives. Did the statement really mean what it said? Without some of this ambiguity a pseudo-event cannot be very interesting.
  • Usually it is intended to be a self-fulfilling prophecy. The hotel’s thirtieth-anniversary celebration, by saying that the hotel is a distinguished institution, actually makes it one.

News agencies often sponsor polls so that they can report on the very poll they created. Instead of reporting the news, they create a pseudo-event to report on. Ironically, this information is processed as “news” and helps shape a person’s judgment on the issue being polled.

It’s not unlike media creating pseudo-celebrities out of talentless, meaningless, witless people like the Kardashians or Paris Hilton by incessant reporting on their every word, activity, clothing, sexual relationship, and shopping spree as if those things actually mattered to anyone else, or were worth the waste of time to describe. Pseudo events are media onanism.

**** An example of a problematic and poorly worded question is:

“How would you describe your position on Town Council passing a resolution in support for the possible location of a casino gaming facility in town?”

This is a leading question. It suggests the council in question will pass a resolution to support a casino. It also suggests a possible location has already been decided (“the” is a definite article). It should be presented as a neutral question without any suggestion that anyone has approved either a casino or a chosen location.

Another question is: are respondents being asked if they support a casino, or just a location? It might seem like the latter to some respondents (which suggests the decision to support a casino was already made!).

Push polls often use misleading or leading questions like this, often framed by incomplete information, innuendo or simply with lies.

***** I can only hope all media ask the questions noted above before printing any results from any online poll or survey, and qualify the results accordingly for their readers and listeners, including all necessary disclaimers, rather than simply reprinting or accepting them at face value. To do otherwise would be as great a disservice to their community as publishing stories about Kim Kardashian’s cat or Justin Bieber’s monkey as “news.”
