The short answer is that you can’t. That’s not to say you can’t reach them – but rather than trying to force them to engage with you, you need to find a way to engage with them.
The longer answer is that you need to understand your audience better in order to work out how. If you understand their motivations, their needs, and how they think and feel about your service, then you’ll find the best way to reach them.
To learn about your audience you need to do some research, and it’s common to see brands and organisations trying to get to know their audiences via online surveys. There’s a low barrier to entry to carry out this kind of research, and anyone can set up a questionnaire, right?
Well, sort of; however, you need to be mindful of all the different ways your research might be biased.
The way you construct the questionnaire can inadvertently bias the answers you are given. Questions can be problematic if the wording is ambiguous or too complex, or the language too technical.
Careful framing can soften the impact of sensitive questions, but it can also lead to response bias if a leading question gives away which answer you are hoping to see.
The ‘belief vs. behaviour’ bias is interesting, as people can give different responses when asked ‘is it a good idea if…?’ versus ‘have you ever…?’. In this case researchers need to be clear whether they’re trying to ascertain someone’s behaviour or their beliefs. People can also skew their answers if they think there is a benefit to doing so – the ‘faking good’ (or bad) phenomenon.
If you add in a neutral option, it can provide an easy way out for people with only a moderate leaning in any particular direction and skew the results of your survey. However, forced choice – i.e. failing to provide a ‘don’t know’ answer – will also skew your results.
I’m sure there are more ways to bias your results, but this isn’t a post about creating a good survey. My point about questionnaire design is that it’s important to have a clear idea of exactly what you want to know, and test any surveys before you launch them to a wider audience.
A common way of making your research invalid, or at least difficult to work with, is by creating bias in the way you seek out responses. I worked on a project recently where it was clear that staff had been asked to complete a survey to ‘bulk the numbers up’; excluding their answers painted a vastly different picture of the responses.
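To make the effect concrete, here’s a minimal sketch of that kind of check – segmenting responses by respondent group and comparing the headline figure with and without the over-represented group. The field names and data are invented for illustration:

```python
# Hypothetical sketch: how an over-represented group (e.g. staff asked to
# 'bulk the numbers up') can distort a headline figure. All data invented.

responses = [
    {"group": "staff",  "answer": "yes"},
    {"group": "staff",  "answer": "yes"},
    {"group": "staff",  "answer": "yes"},
    {"group": "public", "answer": "no"},
    {"group": "public", "answer": "yes"},
    {"group": "public", "answer": "no"},
]

def share_answering_yes(rows):
    """Fraction of rows whose answer is 'yes'."""
    return sum(r["answer"] == "yes" for r in rows) / len(rows)

everyone = share_answering_yes(responses)
public_only = share_answering_yes(
    [r for r in responses if r["group"] == "public"]
)

print(f"All respondents: {everyone:.0%}")   # prints 67% - staff inflate the figure
print(f"Public only:     {public_only:.0%}")  # prints 33%
```

Running the comparison both ways, before drawing conclusions, is a cheap sanity check on any survey where you suspect one group has been over-sampled.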
It’s also common to seek responses via social media – it’s a quick way of getting lots of opinionated people to give you some data! However, are they representative of your core audience?
You need to make sure that you’re asking the people who you’re trying to reach. Sometimes that can feel like a circular problem – how do I reach the people I want to reach to ask them how I’d reach them – and in that case you may need to commission some face to face research.
There’s an idea that you need data from hundreds of people for research to be ‘valid’, but it really depends on what you’re trying to find out. If you’re doing something in the medical field, or trying to canvass opinions to inform policy, then sure, you need a lot of data.
But if you’re just trying to find out if it’s worth investing in a social media strategy, you don’t need that many responses. 20-30 is enough to start seeing patterns, assuming that you haven’t got a skewed research audience in the first place.
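A rough back-of-the-envelope calculation shows why that trade-off works: the margin of error on a simple proportion shrinks with the square root of the sample size, so 20–30 responses give you a wide but pattern-revealing picture, while precise estimates need hundreds. This is a standard approximation, not something specific to any one project:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error for a proportion p from n responses."""
    return z * math.sqrt(p * (1 - p) / n)

# Small samples reveal broad patterns; large samples give precision.
print(f"n=25:  roughly ±{margin_of_error(25):.0%}")   # about ±20%
print(f"n=400: roughly ±{margin_of_error(400):.0%}")  # about ±5%
```

So if 24 of 30 respondents say they’d never use a given channel, that’s a clear pattern even with a ±20% margin – but you wouldn’t use the same sample to claim ‘exactly 80% of our audience’.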
At this point I’d start creating personas – archetypes that characterise segments of your audience. I find these really useful for running workshops with stakeholders, underpinning a strategy, shaping communications plans and content frameworks, you name it.
I find personas particularly helpful when an organisation is divided in its ideas about how best to move forward. Giving key stakeholders an audience to champion can focus them away from their own ideas and opinions and towards the common goal.
When taking a research-led approach to a project, I always advise people that we’re not looking to confirm their own opinions. There needs to be a real readiness to act upon the research, even if that means (big) change. I’m quite often brought in to help with the ‘digital strategy’ and end up advising a non-digital solution – because that’s the best one.
I recently undertook a project where a leaflet in a GP’s surgery was going to be the best way to reach the audience, because they weren’t going to engage with my client unless it was right under their nose while they were ‘a bit bored’.
If someone had said ‘let’s just do a leaflet’ at the beginning of the project I’d have assumed that wasn’t a wise use of budget, but the research suggested otherwise – and that’s exactly why we do research.