The Presentation of a Web Survey, Nonresponse and Measurement Error among Members of a Web Panel
Roger Tourangeau, Robert M. Groves, Courtney Kennedy, Ting Yan
This study tests the idea that features of the presentation of a survey to potential respondents can affect nonresponse error, measurement error, and the relation between the two. A few weeks after they had completed one web survey, we asked members of two opt-in web panels to take part in a second web study and systematically varied our description of that survey. The description varied both the purported topic and sponsor of the second survey. The members of the sample were not aware of the connection between the two surveys. We found little evidence that the survey presentation affected response rates to the second survey or the make-up of the sample on the variables we had collected in the initial questionnaire. There were indications, however, that some answers to the questions in the second survey were affected by the framing of the survey request. For example, respondents were less favorable to gun control than they had been in the initial survey when we described the sponsor of the second survey as “The National Coalition of Gun Owners” rather than “The National Coalition for Victims of Gun Violence” or “The National Center for the Study of Crime.” We argue that the description of the survey can affect how respondents interpret the questions and what they see as a useful response. We also found evidence that attitudes toward the survey sponsor and interest in the topic were related to carelessness in completing the questions. Respondents were, for example, less likely to give the same answer to every item in a grid if they had favorable attitudes toward the sponsor than if their attitudes toward the sponsor were unfavorable.
Keywords: Survey presentation, nonresponse rates, nonresponse bias, measurement error