Engagement: Is it Research?
At Delaney + Associates, we follow the IAP2 Planning Protocol, a framework designed to create a consistent approach to planning and designing an engagement. After several hundred projects, we apply this framework consistently, with modifications based on the needs of each client.
It’s after the engagement, when the reporting and analysis are conducted, that facilitators’ approaches can be less consistent. Some facilitators or engagement professionals simply hand over the findings and leave it to the client to evaluate them. For us, this is rarely the case. Our clients want to act on the input, and an incomplete analysis is often a barrier to doing so. This means we prepare final reports regularly. Our approach has been to develop a report based on what was heard (the analysis of the engagement) and what was said (raw notes, survey results and interview responses).
The other day someone said to me, “so you’re a social scientist.” This led me to ask myself: “Is engagement research?”
In many ways, yes, engagement is research. It seeks to understand or solve a problem, and it draws on multiple sources of information to inform a hypothesis, assumption or idea. It involves a methodology and a need to qualify and/or quantify information.
Certainly there is a framework for planning and designing, and a methodology for evaluating results, but…
To say engagement is research suggests, to me, that stakeholders are scientific subjects: that the engagement is more of a one-way transaction (“what data can I mine from this subject to solve my problem?”), and this is not my personal approach. We think of engagement as relational and community-building, rather than treating a set of hard numbers as the overarching goal.
Don’t get me wrong – when possible, we provide clients with both qualitative AND quantitative analysis. But when we’re working with potentially hundreds of uniquely worded verbal or written comments, it’s not usually realistic within the scope of a project to code and quantify each individual response.
After several years of experience, I think it’s fair to say we’ve learned that:
- It’s critical to understand the perspective of the decision maker when it comes to presenting the results of the engagement. If they are strongly in favour of quantifiable data, then it needs to be built into the engagement design. For example, such an engagement would receive input via surveys with no open text fields, and/or other techniques that would result in highly quantifiable data.
- There are levels of analysis. For example, it is relatively straightforward when evaluating qualitative input, such as interviews or focus groups, to report that the majority of respondents support A versus B, or that there are concerns relating to traffic, local business access or parking. However, it is far more labour intensive to quantify that 62% of respondents had serious concerns around parking and that, of those, 57% also had concerns around local business access. Getting to that level of analysis requires coding the issues and setting very detailed parameters by which input will be coded.
- It’s important to understand the differences between public engagement and public opinion research (POR). In Canada, POR is a method of understanding the views of a particular demographic on a particular topic. It can be used by government to test the popularity of a proposed policy, or by a consumer product manufacturer to test the viability of a new product. However, there are limited formats used in POR and it is generally used in testing a proposal, draft or concept and so would be firmly rooted in CONSULT on the IAP2 spectrum. It’s always a good idea to stop and ask yourself (and the client) if all they are looking to do is test a concept, because then perhaps a POR approach is more appropriate.
- Start with the end in mind. Whatever the information and reporting needs of the project are, make sure you build them into the design. A beautiful, collaborative, open-space type meeting is not going to easily yield quantifiable results that demonstrate a clear preference for an option; it’s simply not designed to produce that output. So have a sense of what the final report should look like from the client’s perspective – not from the content perspective, because we don’t want to bias anything – in terms of layout and approach.
So, yes, I think engagement professionals are, in a way, researchers. But we often work with diverse, vibrant communities of people who have a great deal to say about a variety of issues and topics, and testing clearly defined hypotheses is generally not what engagement seeks to do.