Recently we blogged about how to avoid annoying your survey respondents – our top six pet peeves. We noted that overly long surveys were the single biggest bugbear for our members.
We thought it would be worthwhile to explore the issue of over-long surveys in greater detail, look at the impact and implications of stretching respondents’ attention spans to snapping point, and offer some advice based on the feedback we receive from our panellists.
We took a look at two recent surveys of varying lengths that had been scripted and hosted on our mobile-friendly platform. We compared the respondent open feedback and star ratings (we ask for both at the end of every survey we host), speed of fieldwork, straight-lining and mid-survey drop-out levels. Here’s what we found:
Our analysis is not scientific, of course (it’s based on a sample of two!), but rather an illustrative view of how survey length can affect the outcome of your research project. It’s worth noting that the topic and question structure of the long and short surveys are not the same, which may itself influence the results. Nonetheless, we see similar effects on most surveys over 20 minutes.
The truth always comes through in feedback, which is why we always collect and analyse it. From the beginning to the end of fieldwork, we read panellist feedback to check for any issues respondents might be experiencing, from questionnaire design problems to technical glitches.
“[it] went on way too long ...I lost interest” (Sophie, The Student Panel)
“quite long to give detailed and decent answers” (Jake, The Student Panel)
“It was not too long which made it easier to answer. Questions were clear…” (Will, The Student Panel)
“Clear and concise, easy to use and complete” (Tara, The Student Panel)
Respondent comments from the long versus short surveys highlight the frustrations respondents feel when completing lengthier questionnaires (and, conversely, the satisfaction of short ones). The sentiments for both types of survey were repeated throughout the feedback section. When we coded the open feedback from both surveys (as in the table above), we found three times as much positive feedback for the shorter survey as for the longer one: 60% for the ten-minute survey versus 20% for the 30-minute survey.
Alongside the open-form text, we ask respondents to rate their in-survey experience on all surveys, which we’re now starting to share with clients. Unsurprisingly, the ten-minute survey was rated significantly higher than the 30-minute survey: the shorter one achieved a star rating of 4.3 and the longer a star rating of 2.8 (where 1 = very poor and 5 = very good).
Straight-lining (where respondents give the same answer down a grid to speed through questions), especially towards the end of surveys, is a common problem with lengthier questionnaires. It doesn’t imply bad faith on the respondents’ part; it just means they are getting bored. It makes your data noisier and more equivocal, and it is unlikely that algorithms can spot all straight-lining. So if you want your findings to be robust, incisive and insightful, it’s imperative to think about respondent engagement. Predictably, in our analysis we noticed a higher level of straight-lining in our longer survey than in the shorter one, with four times as many straight-liners in the 30-minute survey as in the ten-minute survey.
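As a minimal sketch of the idea (this is an illustration, not our production quality check, and the respondent IDs and answers are made up), a basic straight-lining flag simply looks for respondents whose grid answers never vary:

```python
def is_straight_liner(grid_answers):
    """Return True if every answer in a grid question is the identical value."""
    return len(set(grid_answers)) == 1

def flag_straight_liners(responses):
    """responses: dict of respondent_id -> list of answers to one grid
    (e.g. a 1-5 agreement scale). Returns the ids whose answers never vary."""
    return [rid for rid, answers in responses.items() if is_straight_liner(answers)]

# Hypothetical responses to a five-item grid
responses = {
    "r001": [3, 3, 3, 3, 3],   # same answer throughout: flagged
    "r002": [1, 4, 2, 5, 3],   # varied answers: not flagged
    "r003": [5, 5, 5, 5, 5],   # flagged
}
print(flag_straight_liners(responses))  # -> ['r001', 'r003']
```

Real-world checks are usually less blunt than this (e.g. combining near-identical answer patterns with completion speed), which is part of why, as noted above, automated detection rarely catches everything.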
We understand the time pressures facing many of our clients. However, it’s often forgotten how much of an impact survey design can have on fieldwork times. Our analysis shows this clearly: our shorter survey achieved approximately three times as many completes per day as the 30-minute survey. A longer survey invariably results in a higher mid-survey drop-out rate, which means more respondents have to be invited to reach the required number of completes. Can you afford that when looking for niche sample groups? And even more importantly: given the pressure research executives are under to add value, extended fieldwork timelines just cut into good analysis time. Who can afford that?
Mid-survey drop-out levels
We pride ourselves on having low drop-out rates across all our surveys. This is in large part because we are a youth specialist research house (as well as a Panel Services provider), so we frequently run highly relevant and well-crafted surveys aimed at our 16-30 audience. It’s also attributable to our award-winning mobile-optimised survey template, which ensures a better respondent experience. That said, in our analysis we still found a higher-than-average mid-survey drop-out rate for our longer survey: 40% of those who started did not complete, compared with 12% for our shorter survey. Do high drop-out rates matter if you’re still able to complete fieldwork? The short answer is yes: high drop-out reduces survey validity and, together with our points on straight-lining, means the resultant findings are 'noisier' and less incisive.
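To see how drop-out inflates fieldwork, here is a back-of-the-envelope sketch. The drop-out rates are the ones quoted above; the completes target is hypothetical, and this ignores invitation response rates and screen-outs, which would inflate the numbers further:

```python
import math

def starts_needed(target_completes, dropout_rate):
    """Survey starts required to hit a completes target,
    given a mid-survey drop-out rate (fraction of starters who abandon)."""
    return math.ceil(target_completes / (1 - dropout_rate))

target = 500  # hypothetical completes target
print(starts_needed(target, 0.12))  # ten-minute survey: 569 starts
print(starts_needed(target, 0.40))  # 30-minute survey: 834 starts
```

On these figures, the 30-minute survey needs roughly 265 more respondents to start it than the ten-minute one just to reach the same completes target.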
So what can you do if you’ve got a long survey?
Here are three golden rules:
1. Keep it under 20 minutes
- Are all the questions in the survey necessary?
- Remove demographic questions that are already held on the panel; these can always be appended after fieldwork.
- Are you running bad questions just to maintain a time series? It might be time to reconsider and break with the past.
2. Make it engaging
- Make it as easy and as enjoyable to complete as possible.
- You don’t have to gamify. Simple routes to a more engaging survey could include images, interactive icons and even improving questionnaire wording.
3. Be absolutely honest in communication
- Tell respondents how long the survey will take, and don’t be tempted to understate the number when you communicate it.
- Prepare respondents for a likely screen-out rate.