
Evaluating communications

How to evaluate the impact of your campaign.


Practice points

  1. Consider establishing standard sets of questions and response options that allow comparison over time and across campaigns and forces.
  2. Consider prioritising questions that are as close to the target behaviours as possible (for example, intentions).

Achieving robust evaluations of communications campaigns is usually challenging with the time and resources available. However, there are ways to make the best use of limited resources.

A robust impact evaluation would often involve a randomised controlled trial or quasi-experimental design. However, these require time and resources and can often cost more than the campaign itself.

In most cases, evaluations therefore consist of surveys and focus groups that assess:

  • awareness of campaigns through recognition and recall
  • subjective reactions to the campaign materials
  • attitudes, intentions and self-reports relating to the target behaviours

Standardising designs

To be able to compare campaigns over time and with one another, it is important to standardise the designs as much as possible. For example, you may decide for any new campaign to take a baseline measure before it is launched, followed by a post-launch measure and three-month follow-up.
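As a minimal sketch of how a standardised design supports comparison, the following Python example (using the pandas library) summarises agreement scores for a single intention question at each of the three waves. The column names, wave labels and five-point coding are illustrative assumptions, not a prescribed format.

```python
import pandas as pd

# Hypothetical responses: one row per respondent per wave. 'score'
# codes a five-point agree/disagree scale (1 = strongly disagree,
# 5 = strongly agree) for a single intention question.
responses = pd.DataFrame({
    "wave": ["baseline"] * 4 + ["post_launch"] * 4 + ["follow_up_3m"] * 4,
    "score": [2, 3, 3, 2, 4, 4, 3, 5, 4, 3, 4, 4],
})

# Because every campaign uses the same waves and the same scale,
# the same summary can be produced for each campaign and compared.
summary = (
    responses.groupby("wave")["score"]
    .agg(["mean", "count"])
    .reindex(["baseline", "post_launch", "follow_up_3m"])
)
print(summary)
```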

It is also important to standardise and prioritise measures that are as close to the behaviours of interest as possible, even if they rely on self-report (rather than observation, which would be more objective but harder to achieve).

Questions that are closest to the behaviours of interest will typically focus on intentions relating to specific scenarios, with response options that allow respondents to express their level of commitment (Fishman, Lushin and Mandell, 2020). For example, the extent to which they agree or disagree with 'I intend to intervene if I see behaviour that is sexist'.

Questions may also measure behaviour that has (or has not) already occurred. For example, 'In the past 6 months, I have intervened when I witnessed sexist behaviour'.
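A standard question set can be held as simple structured data so that wording and response options stay identical wherever they are used. The sketch below shows one possible layout in Python, using the two example items above; the item IDs and the 'kind' field are invented for illustration.

```python
# A five-point agreement scale shared by every item, so responses
# stay comparable over time and across campaigns and forces.
AGREEMENT_SCALE = [
    "Strongly disagree",
    "Disagree",
    "Neither agree nor disagree",
    "Agree",
    "Strongly agree",
]

# Illustrative question bank; 'kind' separates intention items from
# self-reported past behaviour. IDs are invented for this sketch.
QUESTION_BANK = [
    {
        "id": "INT-01",
        "kind": "intention",
        "text": "I intend to intervene if I see behaviour that is sexist.",
        "options": AGREEMENT_SCALE,
    },
    {
        "id": "BEH-01",
        "kind": "past_behaviour",
        "text": "In the past 6 months, I have intervened when I "
                "witnessed sexist behaviour.",
        "options": AGREEMENT_SCALE,
    },
]
```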

Standardising measures

Police force communications teams may benefit from a shared set of questions and response options, so that impact can be assessed consistently and compared across forces. The questions could assess COM-B barriers and facilitators of behaviours (such as those identified in the checklists in the section on Targeting messaging to audiences' capabilities, opportunities and motivations) as well as self-reported behaviours (Government Communication Service, 2021).
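As a rough illustration of cross-force comparison, the sketch below assumes each item has been tagged with the COM-B component it measures and pivots mean agreement scores by force. The component tags, force labels and figures are all hypothetical.

```python
import pandas as pd

# Hypothetical item-level results, each tagged with the COM-B
# component it measures; scores are invented mean agreement
# values (1-5) per force.
results = pd.DataFrame({
    "force": ["A", "A", "A", "B", "B", "B"],
    "component": ["capability", "opportunity", "motivation"] * 2,
    "mean_score": [3.1, 2.4, 3.8, 3.5, 2.9, 3.6],
})

# Pivoting by force gives a consistent, comparable view of which
# barriers and facilitators each force may need to address.
print(results.pivot(index="component", columns="force",
                    values="mean_score"))
```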

There are established methods for addressing self-report bias when designing questions and response options, for which it may be worth seeking expert input (Althubaiti, 2016; Kreitchmann and others, 2019).

Existing validated questionnaires can also be used instead of developing new questions from scratch, although they can sometimes be impractical because of their length. Examples include questionnaires that explore culture (Glick, Berdahl and Alonso, 2018; Queiros and others, 2020) and wellbeing more generally, as well as those that focus specifically on experiences of sexism at work (Warren and others, 2023; Oswald, Baalbaki and Kirkman; Salomon and others, 2020).

It is also worth drawing on routinely collated data, such as complaints about sexist behaviours. However, these have to be interpreted with caution, given that communications campaigns may increase the reporting rate, rather than the incidence of the sexist behaviours themselves.
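A simple before-and-after summary of such data might look like the following sketch. The monthly counts and dates are invented, and, as noted above, a rise after launch may reflect increased reporting rather than increased incidence.

```python
import pandas as pd

# Invented monthly counts of complaints about sexist behaviour,
# with a campaign launched at the start of April.
complaints = pd.Series(
    [5, 4, 6, 9, 11, 10],
    index=pd.period_range("2024-01", periods=6, freq="M"),
)

launch = pd.Period("2024-04", freq="M")
before = complaints[complaints.index < launch].mean()
after = complaints[complaints.index >= launch].mean()

# Interpret with caution: a higher rate after launch may mean more
# reporting of incidents, not more incidents.
print(f"Mean monthly complaints before launch: {before:.1f}")
print(f"Mean monthly complaints after launch:  {after:.1f}")
```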

