This page is intended for use by students and researchers in the University of Cambridge Schools of Technology and Physical Sciences whose research involves collecting data from people using questionnaires or interviews. It is part of a larger set of research guidance pages on work with human participants.

Ethical review guidance

This page gives general guidance relating to the conduct of surveys and questionnaires. The following issues are particularly relevant with regard to ethical review:

  • Recruitment
  • Anonymity
  • Data retention
  • Incentives and compensation
  • Permission


Any study where you collect data by asking people questions is a survey. This can be conducted using paper questionnaires, email, web-based survey forms, or occasionally by telephone or in public spaces. The people who participate in a survey are generally called 'respondents'.

A study in which you analyse data about people and their activities that has been collected without contacting them individually, or specifically asking them to respond to questions, is described as data research. See that page for further details.

A study in which you ask people to keep records of their daily life, or their usage of some technology, is described as a diary study. These are described in more detail on the page about diary and probe studies.

Practicalities - Surveys

Questionnaire design

Questionnaires generally include a combination of closed questions (predetermined responses, either yes/no or multiple choice), Likert scales to indicate strength of agreement with a statement, and open questions (free text, which must be coded for analysis).

It is easy to make serious errors when you first attempt to design a questionnaire. There are many textbooks and online guides - make use of them. If possible, ask an expert to review a prototype of your questionnaire, and try it out in advance with several pilot respondents. Typical traps include biased questions, ambiguous questions, poor 'guard' logic, inconsistent response formats, and failure to anticipate some valid answers or reasons for not giving an answer.


Recruitment

Who do you want to respond to your survey, and is this sample expected to be representative of a larger group? Most surveys are initiated from some database or email list. You should ensure that using the list in this way is consistent with its terms of use, including any Data Protection Act considerations; check this with the owner of the list.

It is possible to recruit directly by telephone or by approaching pedestrians. These approaches are stressful and time-consuming, and should only be attempted with expert guidance and preparation.

Incentives and compensation

Not everyone who you ask to complete a survey will do so. It is reasonably common to encourage survey responses by offering a gift or other incentive to randomly selected respondents. This often requires that you collect contact details, which raises the issues of anonymity discussed below.


Anonymity

Most surveys are anonymous - they do not record either the name of the respondent, or the name of any institution that the respondent represents. This can be inconvenient if you realise that you need more data after collecting responses (whether for clarification, to compensate for errors in the survey design, or to investigate subsequent research questions). Nevertheless, we advise making surveys anonymous whenever possible.

If it is essential to contact respondents subsequently, it may be acceptable to request an email address, but this should be optional. If email addresses are collected, your data will then be subject to the terms of the Data Protection Act.

Many surveys incorporate demographic data (age, gender, education, etc.). This should be minimised - you should not collect any demographic data unless it is related to a specific research question. Demographic data may include personal details that would bring your research within the terms of the Data Protection Act, in which case the precautions noted below must be taken.


There is a range of tools for administering online surveys. You should check whether you can extract all of your data, and whether the tool imposes a limit on the number of responses. The most popular at the time of writing, SurveyMonkey, does have a limit. SurveyBob has no limit, but displays advertising on some pages.

Data Retention

If survey responses do not include any personal data, then the data may be retained. If they do contain personal data, then they fall within the terms of the Data Protection Act. Personal data should be kept secure (see data security below). Data that would allow a respondent to be identified should be kept in a separate place throughout the research project, with an anonymised code used during analysis work and at publication time. It is good practice to destroy any personal data after a stated period of time.
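The separation described above can be sketched in code. This is a minimal illustration only, not a prescribed scheme: the field names, the dictionary layout, and the use of a short random hex code are all assumptions made for the example.

```python
import secrets


def pseudonymise(responses):
    """Split each raw response into an identifiable record and an
    anonymised record, linked only by a random participant code.

    The key file (code -> personal details) should be stored
    separately and securely; only the anonymised records are used
    during analysis and at publication time.
    """
    key_file = {}       # code -> personal details; keep apart from analysis data
    analysis_data = []  # anonymised records used for analysis/publication
    for r in responses:
        code = secrets.token_hex(4)  # random 8-character code (illustrative)
        key_file[code] = {"name": r["name"], "email": r["email"]}
        analysis_data.append({"code": code, "answers": r["answers"]})
    return key_file, analysis_data
```

Destroying the key file at the end of the stated retention period then renders the remaining analysis data fully anonymous.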

Informed consent

In general, voluntary completion of a questionnaire or interview can be taken as consent for this data to be used in research. Nobody should ever be compelled to participate in a research survey (for example, students should not be required to participate in research as a condition of course grading). You may wish to assure respondents that no personal data is collected, or, if it is collected, that it will not be published and will be destroyed.

Advice on Survey Validity

Sampling, Response Rate and Selection

You will want to claim, in presenting your research, that your results apply to a larger population beyond those who responded - that you had a representative sample. How can you justify that your recruitment database was genuinely representative? Only a subset of those in the database will have responded (often between 5% and 50%). Can you be sure that those who didn't respond would have given the same answers? What are the reasons they didn't respond, and might these be related to any of the questions?

Coding and Analysis

Closed questions can be used as a basis for statistical comparisons, either investigating differences between groups within your sample, or correlations between responses to questions. Survey responses are not generally very sensitive measures, so choosing appropriate statistical techniques may not be straightforward.
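As one sketch of such a group comparison: for a closed yes/no question answered by two groups of respondents, a chi-square test of independence on the 2x2 contingency table is a common choice. The function below is illustrative only; the table layout and names are assumptions, and in practice a statistics library would also report the p-value.

```python
def chi_square_2x2(table):
    """Chi-square statistic for a 2x2 contingency table, e.g.
    rows = two respondent groups, columns = yes/no answers.

    Compare the result against 3.84, the critical value at
    p = 0.05 with one degree of freedom.
    """
    (a, b), (c, d) = table
    n = a + b + c + d
    observed = [[a, b], [c, d]]
    # expected counts under the independence hypothesis:
    # (row total * column total) / grand total
    expected = [[(a + b) * (a + c) / n, (a + b) * (b + d) / n],
                [(c + d) * (a + c) / n, (c + d) * (b + d) / n]]
    return sum((observed[i][j] - expected[i][j]) ** 2 / expected[i][j]
               for i in range(2) for j in range(2))
```

For example, if 10 of 30 respondents in one group and 20 of 30 in another answered "yes", `chi_square_2x2([[10, 20], [20, 10]])` exceeds 3.84, suggesting the difference between groups is unlikely to be chance.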

Single value statistics have little research relevance unless they can be related to an external comparison or prior hypothesis. (30% of respondents said they liked your product - but what would they have said about a different product?)

Where your survey included open questions, how will you draw conclusions about patterns or trends across the answers? This will involve creating a set of coding categories, assigning each answer to one or more categories, and dealing with answers that fall outside the coding scheme, are ambiguous, and so on. You should probably get a second person to re-code the same data, and perform a statistical inter-rater reliability analysis.
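One widely used inter-rater reliability statistic is Cohen's kappa, which corrects the raw agreement between two coders for the agreement expected by chance. A minimal pure-Python sketch, under the assumption that each coder assigns exactly one category per answer:

```python
from collections import Counter


def cohens_kappa(codes_a, codes_b):
    """Cohen's kappa for two raters who each assigned one
    category to every item (codes_a[i] and codes_b[i] are the
    two raters' codes for item i)."""
    assert len(codes_a) == len(codes_b)
    n = len(codes_a)
    # observed proportion of items on which the raters agree
    observed = sum(a == b for a, b in zip(codes_a, codes_b)) / n
    # agreement expected by chance, from each rater's marginal frequencies
    freq_a, freq_b = Counter(codes_a), Counter(codes_b)
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / (n * n)
    if expected == 1:
        return 1.0  # both raters used a single category throughout
    return (observed - expected) / (1 - expected)
```

Values near 1 indicate strong agreement; as a rough convention, kappa above about 0.6 is often treated as acceptable, though thresholds vary by field.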

It may be the case that you did not have prior research hypotheses relating to some open questions. In this case, it can be valuable to follow a rigorous process by which codes and potential theoretical concerns are derived from the collected data (for example, Grounded Theory methods). However, these are time-consuming. It is unwise to collect large amounts of verbal data without having a firm plan in advance of how it will be analysed.




The initial version of this page was drafted by Alan Blackwell. 

All comments and feedback are welcome. Please send any feedback to