Table 4: Evidence-based classification on the impact of the various indications as proposed in the paper.

GUIDELINE (article) TYPE OF QUESTIONNAIRE EFFICACY (article)
Use “other” response options to enhance response rates in self-administered questionnaires, or use them during questionnaire testing, in order to identify new issues or to elaborate on closed response formats. [10] Electronic Questionnaire
E-mail Questionnaire
Web Questionnaire
Responses like “Don’t know/not sure” and “would rather not say” are not influenced by the presence of a progress indicator: across all 69 non-demographic items in the survey, the mean number of such responses is virtually identical in both versions (7.91 for the progress indicator version and 7.92 for the version with no progress indicator). [9]
Use a personalized cover letter to increase survey response rates. [21] Electronic Questionnaire
E-mail Questionnaire
Web Questionnaire
The study shows that personalization of the email invitation increases the response rate by 8.6 and 7.8 percentage points in the two experiments conducted. [9]

Using the salutation “Dear Forename” increased the odds of a response by almost 40% compared to using an impersonal salutation like “Dear Student”. [22]

Impact of a personalized salutation when inviting existing panel members to exit the panel: using a non-personalized salutation increases the chances of a person leaving the panel by 1.4 times. [23]

The highest response is obtained when a personalized invitation comes from a high-power source (53.4%) and the lowest when an impersonal invitation comes from a neutral-power source (40.1%).
In the experiment, the neutral-power condition reads: From (name), (Strategy, Planning and Partnerships), The Open University; while the high-power condition reads: From Professor (name), Pro-vice chancellor (Strategy, Planning and Partnerships), The Open University. [23]

Effect of invitation personalization on response rate:
     • response rate = 53.51% among the sample assigned to the non-personalized condition “Dear student”;
     • response rate = 92.91% among the sample assigned to the personalized condition “Dear first name, last name”. [23]
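As an aside, the “odds of a response” figures cited in this row relate to plain response rates through a simple transformation. A minimal sketch of that arithmetic (function names are illustrative, not from the cited studies):

```python
def odds(rate):
    """Convert a response rate (a proportion between 0 and 1) into odds."""
    return rate / (1.0 - rate)

def odds_ratio(rate_a, rate_b):
    """Odds ratio of condition A relative to condition B."""
    return odds(rate_a) / odds(rate_b)

# Example with the personalization figures above: odds_ratio(0.9291, 0.5351)
# quantifies how much the personalized salutation raises the odds of a
# response over the impersonal "Dear student" condition.
```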
Provide an estimate of the time required to complete the questionnaire. [24] Electronic Questionnaire
E-mail Questionnaire
Web Questionnaire
A survey-length statement given in the email invitation does not significantly influence response rates:
     • response rate = 59.10% among the sample assigned to the specific survey-length condition;
     • response rate = 57.32% among the sample assigned to the vague survey-length condition. [9]
Use reminder letters and telephone contact, in order to increase response rates. [24] E-mail Questionnaire The study describes three email surveys conducted among public health physicians in the UK. Researchers sent the questionnaire to potential respondents, then sent reminders at three and seven weeks and contacted all non-responders by telephone at eight weeks. Results show that the response rate of the third survey increased from 81% to 88% after the telephone reminder. [9]

The paper describes the design of a Web survey to determine the use of the Internet in clinical practice by dental professionals. After the first email, three additional messages were sent to non-respondents during the following three weeks. In the second and third follow-ups, the survey was included directly in the email message. In addition, while the initial message and the first follow-up message were sent from a generic email account, the second and third follow-ups were sent directly from the list owner’s account, with a personal request for a response to the survey. The response rate for surveys entered via the Web was 32.9% after the initial mailing, 50.2% after the first follow-up, 57.1% after the second follow-up and 64.4% after the third follow-up. [25]

The aim of the study is to determine the effect of a sequential mixed-mode survey on response rate. Researchers distributed questionnaires among a sample of college-bound high school students.
In the first stage, respondents were asked to respond via the Web; all respondents received a reminder by mail (without a hard copy), and a random subset of them also received phone reminders.
In the second stage, non-respondents were mailed a hard copy of the questionnaire in addition to the Web survey option; all respondents received two reminders by mail (a postcard, and a letter with a hard copy), and a random subset of them also received phone incentives in the form of $3 McDonald’s gift certificates.
Results show that phone reminders positively affect response rate: the response rate among people who received no phone reminder was 17.8%, while the response rate among people who received a phone reminder was 30.3%. Results also show that the use of incentives positively affects response rate: the response rate on or after day 35 is 31.5% with incentives and 5.6% without. [26]
The survey topic must be relevant to the target group. [27] E-mail Questionnaire
Web Questionnaire
Respondents who are more interested in the topic are more likely to provide a response to the open questions in comparison to respondents who are less interested (86.2% vs. 62% for the first question; 88.3% vs. 60.1% for the second question). [5]

Respondents who are more interested in the topic are more likely to respond to the follow-up probes than respondents who are less interested (33.5% vs. 17.1% responded to the probe after the first question; 16.1% vs. 4.7% responded to the probe after the second question). [28]

Respondents who are more interested in the topic provide significantly more themes than respondents who are less interested (2.7 vs. 2.1 themes for the first question; 2.0 vs. 1.5 for the second question). [28]

Respondents who are more interested in the topic elaborate significantly more often than respondents who are less interested (32.1% vs. 17.9% for the first question; 21.6% vs. 13.9% for the second question). [28]

Respondents who are more interested in the topic provide significantly more words most of the time than respondents who are less interested (means of 11.8 vs. 8.5 words for the first question; 11.1 vs. 8.2 for the second question). [28]
Use a semiautomatic login procedure, including an access code, in order not to decrease response rates and to increase data quality. [28] Electronic Questionnaire
Web Questionnaire
Automatic login significantly increases response rates by nearly 5 percentage points over manual login, perhaps because it is less burdensome to sample members. [5]

The study’s data show that using a manual login procedure:
     • does not decrease response rates (45.2% for the automatic login condition vs. 51.6% for the manual login condition);
     • increases the overall degree of data quality: almost 5% more completely filled-out questionnaires in the manual condition than in the automatic condition. [29]
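In practice, the automatic login condition in such experiments amounts to embedding the respondent’s access code in the invitation link, so no manual entry is needed. A minimal sketch under that assumption (the URL and function names are hypothetical, not from the cited studies):

```python
import urllib.parse

def invite_link(base_url, access_code):
    """Build an invitation URL with the access code embedded (automatic login)."""
    return base_url + "?" + urllib.parse.urlencode({"code": access_code})

def parse_code(url):
    """Recover the access code on the survey side."""
    query = urllib.parse.urlparse(url).query
    return urllib.parse.parse_qs(query).get("code", [None])[0]
```

With a manual login, the same code would instead be printed in the invitation and typed by the respondent on a login page.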
It is better to create a plain questionnaire than a fancy version of the same questionnaire, because the plain version increases response rate and completeness and reduces completion time. [30] E-mail Questionnaire
Web Questionnaire
The study presents two experimental questionnaires: the “fancy” questionnaire and the “plain” questionnaire, both with the same wording and question order.
     • 93% of respondents assigned to the plain questionnaire submitted it after completing it, compared with 82% of those assigned to the fancy questionnaire.
     • The plain questionnaire obtained a higher response rate (41.1%) than the fancy design (36.29%). [5]
Use a mixed-mode strategy (electronic and pen-and-paper questionnaires) to reach respondents without access to the Internet. [31] E-mail Questionnaire
Web Questionnaire
The aim of the study is to determine the effect of response mode (online mode, simultaneous mixed mode or sequential mixed mode) on response rate among a sample of doctors undertaking clinical practice. The study’s design considers three different response modes:
     • Online mode: respondents received a personal invitation letter with login details and the option to request a paper copy; respondents were sent a reminder letter around three weeks later that again included login details.
     • Simultaneous mixed mode: respondents received a personal invitation letter with a paper copy, a reply-paid envelope and the option to complete the questionnaire online; respondents were sent a reminder letter around three weeks later that contained login details and the option to request a paper copy.
     • Sequential mixed mode: respondents received a personal invitation letter with login details and the option to request a paper copy; respondents were sent a reminder letter around three weeks later that contained a paper copy, a reply-paid envelope and the option to complete online.
Results show that the mixed-mode strategies had higher response rates: while the online mode had a response rate of 12.95%, the simultaneous mixed mode had a response rate of 19.7% and the sequential mixed mode 20.7%. However, the difference in response rate between the sequential and simultaneous modes is not statistically significant. [5]
Use a progress indicator in order to reduce respondent loss. [32]

Insert words and/or symbols that accurately communicate progress towards completion, in order to avoid premature termination. [5]
Web Questionnaire The presence of a progress indicator can affect the mean time needed to complete the survey. The mean time in minutes to complete the survey was significantly higher (p=.01) for the progress indicator version (22.7 minutes) than for the version with no progress indicator (19.8 minutes). However, the presence of a progress indicator does not influence the completeness rate, despite the extra time required: 89.9 percent of those who received a progress indicator completed the survey, compared with 86.4 percent of those who did not. This difference does not reach statistical significance (p=.13). [16]
The progress indicator does not significantly decrease the break-off rate: in the progress indicator group the break-off rate is 11.27%, while it is 12.55% in the group that was shown nothing. [21]

The aim of the study is to determine how respondent-friendly design principles (in terms of: structure of the first question, use of graphical symbols conveying point of completion, and use of double banking for multiple response questions) influence web questionnaire response rates and data quality.
     • Questionnaire 1: has a fully visible first question and double-banked answer choices, but does not have graphical symbols or words that convey a sense of where the respondent is in the completion progress.
     • Questionnaire 2: has a fully visible first question and graphical symbols or words that convey a sense of where the respondent is in the completion progress, but does not have double-banked answer choices.
     • Questionnaire 3: has double-banked answer choices and graphical symbols or words that convey a sense of where the respondent is in the completion progress, but does not have a fully visible first question.
     • Questionnaire 4: has neither a fully visible first question, nor double-banked answer choices, nor graphical symbols or words that convey a sense of where the respondent is in the completion progress.
The study’s results show that having a POC indicator has little or no effect on dropout rates: 13% of respondents without a POC indicator did not complete the questionnaire, while 14% of respondents with a POC indicator did not complete it. [24]
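The progress indicators evaluated in this row reduce to a small computation over answered and total items. A minimal sketch of the percentage-of-completion label such indicators display (names are illustrative, not from the cited studies):

```python
def progress_label(answered, total):
    """Percentage-of-completion label of the kind shown to respondents."""
    if total <= 0:
        return "0% complete"
    percent = round(100 * answered / total)
    return f"{percent}% complete"
```

Graphical variants (bars, symbols) differ only in how this fraction is rendered, not in how it is computed.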
Drop-down boxes should be used only when they simplify the answering process, and should be identified with “click here”. [33]

Compared to entry boxes, the use of radio buttons may decrease the likelihood of missing data. [16]
Web Questionnaire In the study’s second experiment, radio buttons produce significantly lower drop-out rates than drop-down boxes: among the students with fast internet connections, the retention rate is 96.6% for respondents with radio buttons, compared with 86.4% for respondents with drop-down boxes.
The study’s results indicate that radio buttons act as an initial barrier, filtering out respondents with slower internet connections. But among those who are connected to the internet with sufficiently fast connections, and are thus less influenced by download times, radio buttons produce less dropout than drop-down boxes.
As radio buttons require more time to download but are easier to use than drop-down boxes, the choice between the two response formats is not straightforward and must be guided by other concerns, such as sample composition. [9]
When creating a voluntary survey, be careful to keep the questionnaire short. [34]

Pay attention to the length of the questionnaire, including a small number of short, well-focused, understandable questions and avoiding questions that require excessive concentration. [16]

Keep survey length short, in order to prevent users from speeding through the survey to completion. [17]
Web Questionnaire
App Questionnaire
The aim of the study is to determine how questionnaire length and difficulty influence dropout rate. The experimental design consisted of five different types of questionnaires distributed among the sample:
     • One questionnaire of 12 questions, which did not vary by difficulty
     • Two easy questionnaires, one with 24 questions and one with 36 questions
     • Two difficult questionnaires, one with 24 questions and one with 36 questions
Results show that the addition of 12 questions (6 minutes) to the easier version did not impact dropout rate significantly. However, the addition of 12 harder questions to an already difficult survey caused 6% of additional panelists to drop out. This suggests that questionnaire difficulty, rather than questionnaire length, impacts dropout rate. [19]

The study examined respondents’ dropout rates among 1963 undergraduates participating in one of six survey-based studies administered online. The studies ranged in length from 243 to 535 survey items, and measured constructs related to personality and emotion.
     • Study 1 with 243 items, obtained a response rate of 94.4%
     • Study 2 with 336 items, obtained a response rate of 85.4%
     • Study 3 with 365 items, obtained a response rate of 78.9%
     • Study 4 with 343 items, obtained a response rate of 75.4%
     • Study 5 with 535 items, obtained a response rate of 70.9%
     • Study 6 with 152 items, obtained a response rate of 71.6%

Analysis of the results shows that, among the total sample, 6% of participants dropped out after providing consent, and a total of 10.1% of participants discontinued within the first dozen responses. After the initial item set, dropout decelerated significantly, with a cumulative 13% of participants having dropped out after 100 responses, and 20.7% after 500 responses. [35]

The aim of the study is to determine how the timing of follow-up, different incentives, length, and presentation of the questionnaire, influence the response rate and response quality in an online experimental setting.
     • Timing of the follow-up: a group received the reminder after one week, while the other group received the follow-up after two weeks;
     • Types of incentive: vouchers, donations or lottery;
     • Length of the questionnaire: long version (30-45 minutes) vs. short version (15-30 minutes);
     • Presentation of the questionnaire: textual vs. visual presentation.


Response rates for the different format and design parameters:
     • Timing of the follow-up: early 21.2%, late 19.5%
     • Type of incentive: voucher 22.8%, donation 16.6%, lottery 22.8%
     • Length of the questionnaire: long 17.1%, short 24.5%
     • Presentation of the questionnaire: textual 21.9%, visual 19.0%

The study shows there is no significant difference in response rate between the follow-up at 1 week (21.2%) and the follow-up at 2 weeks (19.5%). On the other hand, monetary incentives (vouchers and lotteries) increase response rates (22.8%) in comparison to donations (16.6%). The shorter version of the questionnaire (24.5%) has a higher response rate than the long version (17.1%), while the response rate is significantly lower for the visual (19.0%) than for the textual version of the questionnaire (21.9%), although this difference is relatively small. [36]
Design the questionnaire using a conventional format similar to those normally used in paper self-administered questionnaires. [37] Web Questionnaire In the Intervention Phase of the study, respondents were assigned to one of three questionnaire conditions, in order to determine how prompts for speeding and non-differentiation in grid questions impact respondent behavior.

Experimental conditions, the behavior triggering the intervention, and the intervention message displayed in a pop-up window:
     • No prompt (control).
     • Non-differentiation (ND) prompt: when respondents straightline (give the same response for all the items in a grid) or near-straightline (the same responses for all items except one), a pop-up window displays: “You seem to have given very similar ratings for the different items in this question. Please think about each item on its own and be sure to give it enough thought so that your answer is informative and accurate. Do you want to go back and reconsider your answers?” (Yes/No)
     • Speeding (SP) prompt: when the total response time to the grid question is less than 300 msec per word (i.e., 0.3 sec multiplied by the number of words in the question), a pop-up window displays: “You seem to have answered very quickly. Please be sure you have given all the items in the question sufficient thought so that your answer is informative and accurate. Do you want to go back and reconsider your answers?” (Yes/No)
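The triggering rules described above are mechanical enough to sketch directly. A minimal illustration of the straightlining, near-straightlining and speeding checks as stated (the 0.3 sec/word threshold is from the study; the code itself is hypothetical):

```python
from collections import Counter

def is_straightlining(responses):
    """Same response chosen for every item in the grid."""
    return len(set(responses)) == 1

def is_near_straightlining(responses):
    """Same response for all items except exactly one."""
    counts = Counter(responses)
    return len(counts) == 2 and min(counts.values()) == 1

def is_speeding(total_time_sec, question_word_count):
    """Total response time under 0.3 sec per word of the question."""
    return total_time_sec < 0.3 * question_word_count
```

A survey engine would run these checks when the respondent submits the grid page and, on a hit, show the corresponding prompt.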


Results show that both prompts reduce speeding and non-differentiation behavior in grid questions:
     • Compared to the control condition (98%), speeding behavior is reduced by the speeding prompt (96.8%)
     • Compared to the control condition (35.5%), speeding behavior is reduced by the non-differentiation prompt (30.2%)
     • Compared to the control condition (47.1%), near-straightlining behavior is slightly higher under the speeding prompt (49.8%)
     • Compared to the control condition (62.6%), near-straightlining behavior is reduced by the non-differentiation prompt (55.3%)
     • Compared to the control condition (48.1%), straightlining behavior is reduced by the speeding prompt (43.9%)
     • Compared to the control condition (38%), straightlining behavior is reduced by the non-differentiation prompt (23.4%)

As we can see, although the grid design is widely used in Web questionnaires to present multiple items with the same response options, it does not improve answer quality. Compared to traditional Web surveys that are essentially online versions of paper questionnaires, an interactive Web survey interface may help respondents give more truthful and correct answers. [16]
Double or triple banking of answer choices should be avoided. [38] E-mail Questionnaire
Web Questionnaire
The aim of the study is to determine how respondent-friendly design principles (in terms of: structure of the first question, use of graphical symbols conveying point of completion, and use of double banking for multiple response questions) influence web questionnaire response rates and data quality.
     • Questionnaire 1: has a fully visible first question and double-banked answer choices, but does not have graphical symbols or words that convey a sense of where the respondent is in the completion progress.
     • Questionnaire 2: has a fully visible first question and graphical symbols or words that convey a sense of where the respondent is in the completion progress, but does not have double-banked answer choices.
     • Questionnaire 3: has double-banked answer choices and graphical symbols or words that convey a sense of where the respondent is in the completion progress, but does not have a fully visible first question.
     • Questionnaire 4: has neither a fully visible first question, nor double-banked answer choices, nor graphical symbols or words that convey a sense of where the respondent is in the completion progress.

The study’s results show that the double-banking treatments resulted in only 6% more checked items per respondent than the non-banked treatments. [16]
Insert a simple progress indicator to show users how far they are through the sequence, especially for long quality-of-life questionnaires. [33] Electronic Questionnaire The progress indicator is shown to significantly decrease the percentage of respondents saying that the survey took too long (6.25% vs. 13.48%). [11]
Use hypertext, colors, interaction strategies and other Web communication capabilities to make the questionnaire more attractive. [24] E-mail Questionnaire
Web Questionnaire
The study compares web survey ratings based on graphical and traditional standard web survey scales, including usability, engagement and enjoyment of taking part in the survey. The study’s results report:
     • no significant disadvantage in completion time between the graphical survey (10.8 minutes) and the standard survey (9.7 minutes);
     • significant advantages in respondents’ perception of the questionnaire’s usability in relation to the type of survey (graphical vs. standard): 86% of the graphical sample found the question style enjoyable, compared to 72% of the standard sample;
     • significant advantages in respondents’ engagement in relation to the type of survey (graphical vs. standard): the final question on the surveys asked respondents to include any open-text comments they had on the survey itself, and 33% of respondents from the graphical survey added a comment, compared to only 20% from the standard survey. [6]
Adopt clear, visible, respondent-friendly privacy policies. [39] Web Questionnaire Study 2 of the paper reports an experimental manipulation of the privacy and trust levels of the survey, presenting four different conditions:
     • Strong privacy condition: the first page of the web survey contained strong privacy policy information
     • Weak privacy condition: the first page of the web survey contained weak privacy policy information
     • High trust condition: institutional logo, no spelling mistakes and no advertisements
     • Low trust condition: advertisements for gambling and money transfer services, and spelling and coding mistakes


Response rates by trust and privacy condition:
     • High trust, high privacy: 78.3%
     • High trust, low privacy: 82.1%
     • Low trust, high privacy: 85.1%
     • Low trust, low privacy: 60.4%

The study’s results show that the impact of low privacy on self-disclosure level is moderated by trust, such that high trust compensates for low privacy when examining response rates. The percentage of self-disclosure is substantially reduced only when a weak privacy policy is combined with cues designed to reduce trust (60%). [1]
Think carefully about which type of page design to choose: the one-page design reflects the conventional format, where there is no interaction with the respondent during questionnaire completion; on the other hand, the multiple-page design increases control over item non-response, but also increases the risk of the questionnaire being abandoned. [40] Web Questionnaire
App Questionnaire
In the study, the questionnaire completion time for the multiple-page design is 30% longer than for the one-page questionnaire. [2]

The multiple-item-per-screen version took significantly less time (p=.01) to complete the 16 items than the single-item-per-screen version. [2]
The initial question must be fully visible on the first screen of the questionnaire, interest-getting, and easily comprehended and answered by all respondents. Do not use a drop-down box or require scrolling to see the entire first question. [21]

As the first question, include an item that is likely to be interesting, easy to answer, and fully visible on the first screen of the questionnaire. [20]
Web Questionnaire The aim of the study is to determine how respondent-friendly design principles (in terms of: structure of the first question, use of graphical symbols conveying point of completion, and use of double banking for multiple response questions) influence web questionnaire response rates and data quality.
     • Questionnaire 1: has a fully visible first question and double-banked answer choices, but does not have graphical symbols or words that convey a sense of where the respondent is in the completion progress.
     • Questionnaire 2: has a fully visible first question and graphical symbols or words that convey a sense of where the respondent is in the completion progress, but does not have double-banked answer choices.
     • Questionnaire 3: has double-banked answer choices and graphical symbols or words that convey a sense of where the respondent is in the completion progress, but does not have a fully visible first question.
     • Questionnaire 4: has neither a fully visible first question, nor double-banked answer choices, nor graphical symbols or words that convey a sense of where the respondent is in the completion progress.

The study’s results show that having the full first question visible has little or no effect on whether respondents decide to complete the question or continue with the survey. [14]
Choose a layout that doesn’t influence users’ opinions and answers. [33] Web Questionnaire The study tests the effects of repeated exposure to a certain image in a web survey on students’ housing conditions, showing four different types of image:
     • Image of an upscale housing condition
     • Image of an average housing condition
     • Image of a deprived housing condition
     • No image

The image was always located in the header of the questionnaire. The respective header appeared at the top of each questionnaire page from the beginning and stayed the same throughout the whole survey. The questionnaire asked respondents to indicate personal satisfaction with their current housing condition and a rating of the city.
     • Interaction between the type of image shown and students’ satisfaction with their current housing situation: there is no significant evidence that the upscale and deprived conditions produce contrast effects on satisfaction with the current housing condition (mean satisfaction = 4.31 for the deprived image and 4.31 for the upscale image).
     • Interaction between the type of image shown and students’ rating of the city: there is no significant evidence that the upscale and deprived conditions produce contrast effects on students’ city ratings (mean city rating = 2.16 for the deprived image and 2.16 for the upscale image).

The study provides evidence on the phenomenon of “banner blindness”, while also showing that the header image is not completely overlooked by respondents, despite the unaffected satisfaction and city rating levels. In fact, in the open space for comments at the end of the survey, several respondents in the “deprived” and “upscale” conditions complained that the pictures were ridiculously bleak or too glamorous, respectively, and by no means representative of students’ housing conditions. [14]

The images did indeed affect the self-ratings of health, producing lower ratings on average for the respondents who got the picture of the healthy woman (mean = 3.10) than those who got the picture of the sick woman (mean = 3.25). [41]

When the image is inserted in the introduction or in the question area, we find the hypothesized contrast effect, with respondents rating their health lower when exposed to the picture of the healthy woman (2.93 and 3.05, respectively), and higher when exposed to the picture of the sick woman (3.29 and 3.30, respectively). [42]

When the image is inserted in the header of the questionnaire, the difference between the two versions is not significant (mean 3.14 for the sick woman, mean 3.29 for the fit woman). One explanation for the lack of effect of the picture content in the header condition may be “banner blindness”: when the image appears in the header area, it may either be ignored altogether (not seen) or viewed as irrelevant to the task. [42]
Try to include only one question per page; if bandwidth is not sufficient, design the questionnaire flow so that the users’ attention is not diverted by other cognitive processes. [42] The aim of the study is to determine the effect of questionnaire presentation (one web page vs. multiple web pages) on respondents’ preferences.
Participants were asked to complete four questionnaires (BDI, BAI, MADRS and QOLI), presented either with one entire questionnaire on one web page, or on multiple web pages. Hence, the questionnaires were answered either on a total of four web pages or on 83 web pages.
Results show that a majority of participants in each of the ten diagnostic groups preferred the single item per page presentation format:
     • among the participants in the depressed group, over 90% of respondents in each group (MM, SS, MS, SM) preferred one item to a page;
     • among the participants in the panic disorder group, over 70% of respondents in each group (SM, MS) preferred one item to a page;
     • among the participants in the social phobia group, over 60% of respondents in each group (SM, MS, MM, SS) preferred one item to a page. [14]
In mobile surveys, remember to consider the four most important factors: small screen size, data entry method and interaction style, mobile context, and the chosen implementation for the questionnaire. [43] App Questionnaire Small screens can affect responding. In the survey, when response options extended beyond the screen, some respondents (23%) reported not having seen them. When a part of the question providing strikingly different information extended beyond the screen, some respondents (5%) seemed to use only the first part of the question. [17]
Choose an app questionnaire when the topic is personal and delicate. [44] App Questionnaire The aim of the study is to determine how the type of device influences completion rate and response quality in questionnaires on sensitive topics.
In the experiment, a questionnaire on sensitive topics was distributed in two consecutive waves. In the first wave, one part of the sample completed the questionnaire using a PC browser, while the rest of the sample completed it in a mobile browser. In the second wave, those who had participated in the mobile survey in the first wave completed the questionnaire via PC, and vice versa.
Results show that the type of device influences completion rate:
     • In the first wave, the completion rate for the PC survey (73.8%) is significantly higher than for the mobile survey (31%). Also, the break-off rate for the mobile questionnaire (13.6%) is five times higher than for the PC questionnaire (2.9%).
     • In the same way, in the second wave, the completion rate for the PC survey (88.4%) is significantly higher than for the mobile survey (38%). Also, the break-off rate for the mobile questionnaire (12.7%) is 12 percentage points higher than for the PC questionnaire (1%).

The study’s results also show that 45% of mobile respondents completed the survey outside the home, compared to 29% of PC respondents. Those completing the survey on a mobile device reported lower levels of perceived privacy (63%) than those who completed it on a PC (75%). Thus, the study shows that users tend to trust data confidentiality more when they complete the questionnaire on a PC rather than via cell phone. [15]