Step 6+: Measuring impact and beyond

Your progress so far

Before beginning this step, you should have already completed:

  • Theory of change
  • Logic model
  • Blueprint
  • Implementation & process study
  • Pilot impact study

If you have not yet completed these steps, please do so before moving on to impact evaluation.

Robust impact evaluations allow you to state with confidence whether your intervention or service has had an impact on children’s outcomes (establishing causality) and how large that impact is. They do this by isolating the effect of the intervention from all the other factors that could influence the outcomes you are evaluating.

Good impact evaluations analyse data collected at a minimum of two time points – before and after the intervention or service – and for two different groups: one which received the intervention and one which did not. This approach is only considered robust if the two groups being compared are broadly similar in all important respects. Random assignment, when done properly, helps to ensure equivalence between the two groups, and means that any difference observed in the post-test scores should be attributable to the intervention rather than to differences in the characteristics of the two groups.
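
As a purely illustrative sketch of this two-group, two-time-point logic, the snippet below compares the change in scores between an intervention group and a comparison group using simulated data. All variable names, score scales and effect sizes are hypothetical, and this is not a substitute for a properly specified statistical analysis plan drawn up by an evaluator.

```python
# Illustrative sketch only: a two-group, pre/post comparison on simulated data.
# All scores and effect sizes are hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_per_group = 100

# Baseline (pre-test) scores are broadly similar for both groups.
pre_control = rng.normal(50, 10, n_per_group)
pre_intervention = rng.normal(50, 10, n_per_group)

# Post-test scores: assume the intervention adds a modest extra improvement.
post_control = pre_control + rng.normal(2, 5, n_per_group)
post_intervention = pre_intervention + rng.normal(5, 5, n_per_group)

# Compare the change (post minus pre) between the two groups.
change_control = post_control - pre_control
change_intervention = post_intervention - pre_intervention

t_stat, p_value = stats.ttest_ind(change_intervention, change_control)
print(f"Mean change (intervention): {change_intervention.mean():.2f}")
print(f"Mean change (control):      {change_control.mean():.2f}")
print(f"Difference in change: "
      f"{change_intervention.mean() - change_control.mean():.2f} "
      f"(t = {t_stat:.2f}, p = {p_value:.3f})")
```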

In our view, the only acceptable designs for measuring impact robustly and establishing causality are randomised controlled trials (RCTs) or quasi-experimental designs (QEDs). RCTs involve randomly assigning participants to an experimental group (who receive the intervention) or a control group (who do not). QEDs take non-experimental data and apply statistical techniques to allow a fair comparison between those who received an intervention and those who did not, and to test for causality.
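
To illustrate the random assignment step that underpins an RCT, the sketch below shows one simple way of allocating a participant list to intervention and control groups. The participant IDs are hypothetical; in a real trial, randomisation would normally be managed by the evaluation team, often using stratified or blocked methods, and kept independent of delivery staff.

```python
# Illustrative sketch of simple random assignment to two groups.
# Participant IDs are hypothetical; real trials typically use stratified
# or blocked randomisation managed independently of delivery staff.
import random

participants = [f"participant_{i:03d}" for i in range(1, 201)]

random.seed(2024)          # fixed seed so the allocation is reproducible
random.shuffle(participants)

midpoint = len(participants) // 2
intervention_group = participants[:midpoint]
control_group = participants[midpoint:]

print(f"Intervention group: {len(intervention_group)} participants")
print(f"Control group:      {len(control_group)} participants")
```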

Are you ready for impact evaluation?

Impact evaluations require a significant amount of resource and technical expertise. They are time-consuming and expensive to run. For these reasons, you will need to be sure that your service or intervention is ready for this step. We therefore recommend that you are able to confidently answer ‘yes’ to the following eight questions before you consider yourself ready to move on to robust impact evaluation.

  1. Does the intervention or service aim to improve outcomes that are important for children’s development (see step 1)?
  2. Does it have a clearly defined target population that specifies the age of the children and their level of need (see step 1)?
  3. Is it likely to provide measurable value for this population over what is currently available (see step 1)?
  4. Can it be feasibly delivered to a high standard (see step 4)?
  5. Are the intervention’s costs known, and are they proportionate to the intervention’s intended outcomes (see step 4)?
  6. Is it possible to recruit participants from the target population at sufficient numbers to justify the cost and time of the intervention (see step 4)?
  7. Is it possible to retain participants from the target population at sufficient numbers to justify the cost and time of the intervention (see step 4)?
  8. Does the intervention have preliminary evidence from a pilot impact study meeting the minimum step 5 requirements of improving at least one of its intended child outcomes (see step 5)?

So, you are ready for impact evaluation

If you can confidently answer yes to all these questions, it is likely you are ready to robustly measure the impact of your intervention or service. There are numerous ways of doing this. In our guide ‘10 steps for evaluation success’ we talk about randomised controlled trials: step 6 focuses on efficacy trials (used to determine whether there is an impact under ideal conditions) and step 7 focuses on effectiveness trials (used to determine whether the results from an efficacy trial can be replicated in real-world conditions).

These methods require significant technical expertise to undertake correctly and are highly resource intensive. For these reasons, we strongly recommend that you buy in this expertise by commissioning an external evaluator with experience in impact evaluation, such as a local university, to independently undertake the evaluation.

Guide · 2019 · EIF · Key resource

10 steps for evaluation success

Our 10-step guide to help you progress along the evaluation journey. This landmark guide provides further details on steps 6 to 10.

Guide · 2019 · EIF · Key resource

Evaluating early help: A guide to evaluation of complex local early help systems

Six key principles for undertaking evaluations within complex local systems.

Guide · 2018 · EIF

Evaluating early intervention programmes: Six common pitfalls, and how to avoid them

Guidance on how to address six of the most common issues we see in our assessments of programme evaluations.

Guide · 2017 · EIF

Early intervention into action: Innovation and evaluation

A set of case studies of local areas that have made changes in how they work and put in place an evaluation to help them capture and learn from the impact of those changes.

Guide · 2017 · EIF

Evaluating early intervention at the local level: Five rules of thumb

Five rules of thumb that local areas should adhere to when undertaking or commissioning evaluation.
