Types of evaluation

Evaluation is often divided into three types:

  • Formative evaluation helps shape the project. It includes activities such as gathering information about the audience to help determine the best way to present information. Formative evaluation begins early in the development of the project and may be carried out in phases as the project develops. It is often conducted with representatives of the intended audience, but may also explore the needs and skills of the researchers and the wider project delivery team.
  • Process evaluation is used to improve how the project is run. It explores what works well and what could be done differently next time. It may focus on the delivery team, but the experiences of the audience are also worth considering (for example, the choice of venue or the clarity of materials).
  • Summative evaluation seeks to establish whether the project has met its stated objectives. It often focuses on the audience or participants, but for some projects it is also useful to explore the impact on the delivery team.

How much evaluation should I do?

The amount of evaluation you do should be in proportion to the size of the project. The exception is pilot projects that use new methods: proportionately, these warrant more evaluation than projects using well-tested methodologies.

Two commonly used evaluation models are:

  • KAB model: impact is measured in terms of Knowledge, Attitude and/or Behaviour. The model assumes that a change in knowledge is easier to achieve than a change in attitude, and that a change in behaviour is the most difficult of all.
  • Kirkpatrick's evaluation model: impact is considered at four levels: Reaction, Learning, Behaviour and Results. Evaluation might focus on the initial 'reaction' of the audience, participants' 'learning', changes in 'behaviour', or 'results', a longer-term measure such as improvements in exam results. To find out more, visit the Kirkpatrick Partners website.

In deciding where to focus your evaluation efforts, consider:

  • How much effect might your activity have on the audience? 
  • Over what time period can you follow the participants? 
  • How significant is your activity likely to be in shaping attitudes or behaviour?

Data to use

Quantitative and qualitative data can both usefully contribute to a programme evaluation, and may be supplemented by observation and the project team's own records. The balance between them is likely to be determined by your objectives for the evaluation and the available budget.

  • Quantitative data – provide measures, for example of how many people attended the event and what they thought of it. The same questions should be used throughout the evaluation, and responses gathered from a representative sample. Often all participants are asked to fill in a questionnaire, sometimes both before and after the event if knowledge change is being measured (a simple pre/post comparison is sketched after this list).
  • Qualitative data – illuminate individual experiences and add subjective context, exploring participants' experience in more depth than quantitative data. Methods range from open questions on a survey to interviews. Sampling should include a cross-section of participants.
  • Observational data – exploring how people participate in an event can be illuminating. Did they take part in all events/activities? Were some aspects more popular than others? How did people interact with your website or display? Decide in advance what you are looking for and structure your observation records accordingly (a simple record structure is sketched after this list).
  • Project team records – keeping an evaluation journal allows the programme team to explore and reflect on the process of developing and delivering the project.
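
If knowledge change is measured with matched before-and-after questionnaires, the analysis can be very simple. The sketch below (in Python, with illustrative scores rather than real data) summarises the mean change across participants; the quiz, its scale and the scores are assumptions to be adapted to your own instrument.

    # Minimal sketch of a pre/post knowledge-change summary.
    # Assumes each participant answered the same scored quiz before
    # and after the event; the scores below are illustrative only.
    from statistics import mean

    # Hypothetical quiz scores out of 10, paired by participant.
    pre_scores = [4, 6, 5, 3, 7, 5]
    post_scores = [7, 8, 6, 6, 9, 7]

    changes = [post - pre for pre, post in zip(pre_scores, post_scores)]

    print(f"Participants:      {len(changes)}")
    print(f"Mean score before: {mean(pre_scores):.1f}/10")
    print(f"Mean score after:  {mean(post_scores):.1f}/10")
    print(f"Mean change:       {mean(changes):+.1f}")
    print(f"Improved:          {sum(c > 0 for c in changes)} of {len(changes)}")

A real evaluation would also check that the sample is representative and, with larger samples, might add a significance test; the point here is only that pairing scores by participant makes the change directly computable.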
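
Similarly, observational data are easier to analyse if every observation is recorded against the same predefined fields. The sketch below assumes hypothetical activities and engagement categories; define your own to match what you decided in advance to look for.

    # Minimal sketch of structured observation records.
    # Activity names and engagement categories are hypothetical.
    from collections import Counter
    from dataclasses import dataclass

    @dataclass
    class Observation:
        activity: str      # which stand/activity the group visited
        group_size: int    # number of people in the group
        engagement: str    # e.g. "passed by", "watched", "took part"

    observations = [
        Observation("microscope stand", 2, "took part"),
        Observation("poster display", 1, "watched"),
        Observation("microscope stand", 3, "took part"),
        Observation("quiz table", 2, "passed by"),
    ]

    # Were some aspects more popular than others?
    popularity = Counter(o.activity for o in observations)
    for activity, count in popularity.most_common():
        print(f"{activity}: {count} groups observed")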