On January 18th 2024, I presented a seminar as part of the CCRI seminar series entitled “Participatory Governance for Scaling-up Nature-based Solutions: Social Science Insights from a fast-paced, impact-focused interdisciplinary project”.
In the seminar, I discussed how, within the context of the dual biodiversity and climate crises, Nature-based Solutions (NbS) address societal challenges with multiple benefits for people and nature. Advocates argue that NbS fundamentally have ecological and social goals, and as such they have attracted considerable attention from national to international scales. However, there are concerns about the potential exclusion and marginalisation of local communities and other groups. Participatory and democratic approaches are often promoted as an antidote to these issues, ensuring more equitable and inclusive decision-making outcomes.
The seminar explored how different framings and messages around NbS can work to open up and close down opportunities for participation, and the implications of this for delivering multiple socio-economic and ecological outcomes in a way that is equitable and sustainable. It presented insights from a fast-paced, solutions-focused interdisciplinary project at the University of Oxford on Scaling-up Nature-based Solutions in the UK. This work was part of a larger NERC-funded project called the Agile Initiative, which aims to revolutionise how research responds to urgent global environmental policy and practice challenges, and the Leverhulme Centre for Nature Recovery. In particular, I focused on the social science contributions, also reflecting on the lessons learned from contributing social science expertise to this kind of solutions-oriented work at the science-policy interface.
Watch the seminar on YouTube and download the PowerPoint slides below.
Are you a practitioner interested in engagement and participation for Nature-based Solutions? Join a free webinar in February 2024 which launches our new guidance, the Recipe for Engagement. Sign up here.
The evidence-backed benefits of participatory and democratic processes include: building trust and integrity, enhancing the perceived credibility of decisions and decision-making institutions; negotiating political divisions and polarisation, promoting solidarity and togetherness; improving socio-economic and environmental outcomes through more plural, flexible and anticipatory governance processes; enhancing the quality of knowledge and evidence through the incorporation of diverse knowledge types and realities; and fostering empowerment, collective action, and community benefits through localised and bottom-up approaches.
The talk was part of the DLUHC 2023 Science Seminar Series, curated by the Chief Scientific Advisor’s Office, which aims to seamlessly integrate scientific evidence into DLUHC’s focus areas, aligning with their research interests and priorities. Our aim was to bridge the gap between academic research and real-world applications in the realm of urban planning and regeneration, housing, and fostering sustainable, thriving and connected communities.
Our core message revolved around the power of ‘engagement’, which is part of broader transformative efforts for more participatory and deliberative democracy and justice. We underscored the significance of involving the public in decision-making processes concerning local places and communities. The evidence we presented shed light on the connection between engagement and the establishment of trust, inclusion, and integrity in political decision-making.
A key highlight of our presentation was the exploration of digital tools for engagement. This was particularly relevant to DLUHC’s initiatives for digital planning and ‘PropTech’, which aim to promote innovative tools and technologies for citizen engagement with planning policy and practice. We delved into both the technical and ethical aspects of technological innovation, offering insight into their application. While digital tools can undoubtedly enhance the effectiveness of engagement in many ways, it is crucial to be cautious about the ethical risks, such as issues related to digital literacy and infrastructure. Our recent academic paper pre-print (free to download) explores these technical and ethical debates around digital tools for democratic and participatory engagement, making relevant recommendations for practitioners and policymakers.
The seminar also emphasised the necessity of embedding a culture of democratic engagement within DLUHC, and also across Government more broadly. We stressed the importance of building the capacity and capability to implement best practices in engagement processes, ensuring decisions align with development, sustainability and local community needs.
Our research gains particular relevance in the rapidly evolving landscape of democratic and digital transformation in the UK. With increasing calls for democratic reform and citizen participation, and an ever-growing toolkit of digital technologies and platforms at our fingertips, the dynamics of planning and environmental decision-making are undergoing a significant shift. On a global scale, influential organizations like the OECD and the European Union are promoting digital tools as catalysts for fostering more interactive, human-centred approaches. Closer to home, the United Kingdom is making bold strides in digital transformation, positioning digital technologies as the front and centre of public service provision and engagement.
Our presentation not only enriched the learning of DLUHC staff, but is also available for viewing to the broader UK public sector. The presentation slides can also be downloaded here.
In a world where democratic reform, sustainable transformations and community empowerment have never been more critical, our seminar served as a reminder of the pivotal role that engagement plays in driving standards of trust, accountability, and inclusion in decision-making and public institutions.
This blog post has been adapted from its original version, posted here.
This blog post has been reposted from my old blog (original post March 2022)
I have been using automated transcription software Otter.ai throughout my 3-year PhD to facilitate data collection and analysis. This tool has been indispensable for transcribing events (e.g. workshops and conferences), in-depth interviews and focus groups with research participants, meetings with colleagues, and much more.
If you’re new to using automated transcription, navigate to my previous blog posts which offer an introduction and tutorial. Importantly, automated transcription comes with a specific set of ethical and privacy considerations, which you can read more about in this post. Since writing these, I’ve run different talks and workshops on automated transcription – you can read a summary of the key messages from these here, including links to presentation slides and recordings.
In this post, I share some insights and tips from my experience using Otter.ai to generate, edit, and prepare transcripts ready for qualitative analysis. I use qualitative analysis software NVivo by QSR in this example. NVivo helps qualitative researchers to organise, analyse, and find insights in unstructured or qualitative data like interviews, open-ended survey responses, social media content, etc. However, there are lots of other proprietary tools you can use for analysing text, as well as free and open source options such as Voyant Tools. You can also use programming languages like R and Python to conduct text mining and analytics (e.g. see this guide for text mining in R). Of course, computer-aided qualitative analysis isn’t the only way to go and manual coding remains just as important.
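As a flavour of the kind of text mining mentioned above, here is a minimal Python sketch that counts the most frequent words in a transcript. The stop-word list and the example snippet are purely illustrative; a real analysis would use a fuller stop-word list (e.g. from NLTK or spaCy) and read the text from your exported transcript file.

```python
from collections import Counter
import re

# Illustrative stop words only - use a fuller list for real analysis
STOP_WORDS = {"the", "and", "a", "to", "of", "in", "that", "it", "is", "i"}

def top_words(text, n=10):
    """Return the n most frequent words in a transcript, ignoring stop words."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(w for w in words if w not in STOP_WORDS)
    return counts.most_common(n)

# Hypothetical transcript-style snippet for demonstration
snippet = (
    "Community engagement matters because engagement builds trust, "
    "and trust in the planning process supports better engagement."
)
print(top_words(snippet, 3))  # 'engagement' (3) ranks first, then 'trust' (2)
```

This is roughly what tools like Voyant or the automatic key-word summaries in transcription software are doing under the hood, just at a larger scale and with better language models.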
The core messages in this post should hopefully be relevant for a broad audience of researchers, regardless of what specific tools and approaches they are using. Equally, while I use Otter.ai in this example, there are plenty of other free and paid tools available in 2022, many of which have pretty similar core features.
1. Edit the transcript
Once you’ve uploaded a recording into Otter.ai (or used the live transcription function) and it has finished transcribing, you’ll need to manually edit it. Although Otter does a pretty accurate job of transcribing audio to text, it will always need human input to check that there are no mistakes. This is a particularly important consideration for researchers, who need to make sure that their participants’ contributions are being accurately represented. It’s also beneficial to spend time going through each transcript to get a ‘feel’ for the data.
So, the first step is to read back through the transcript and correct any mistakes. Different methods will work for different people, but I tend to read through and edit the transcript while listening back to the audio recording at around 1.5x to 2x the speed, slowing down and speeding up as necessary. Now that I’ve been using this method for a long time, it’s become increasingly straightforward and efficient (it takes a few goes to really get used to it!).
The features offered in Otter.ai are particularly useful for editing because you can listen while editing in your internet browser. As shown in the image below, individual words are highlighted as the audio recording plays. However, do make sure that you have a reliable internet connection so that everything saves properly (I’ve learnt this the hard way by losing lots of edited data and having to start again!). If I’m working somewhere with a poor WiFi connection, I usually export the edited transcript as a text file at regular intervals, so if the edited transcript doesn’t save properly at least I don’t lose all of my edits.
The key things that I check for when editing include:
Punctuation errors – e.g. full stops, commas, and question marks where they shouldn’t be (or a lack of punctuation in the right places).
Random paragraph breaks – sometimes, for example when a speaker pauses mid-sentence, Otter.ai automatically starts a new paragraph, so it’s worth checking to see if this has happened and merge paragraphs where necessary.
Lack of paragraph breaks – Otter.ai has a tendency to generate long monologues of speech, which might need to be broken up into smaller paragraphs to make it easier to read.
Spelling errors and incorrect words – I find this happens quite a lot when transcribing different accents, when specific names and locations are mentioned, or when abbreviations are used.
Linked to the above, please do carefully check for any words which could be interpreted as rude or inappropriate – I won’t repeat any here, but I have removed some rather interesting misinterpretations of words from some of my transcripts (!).
Mislabelled speakers – it’s really important to check that Otter.ai has labelled your speakers correctly and not mislabelled anyone (this can happen, for example, when someone interrupts someone else mid-sentence, or if two people have very similar sounding voices).
Remove repetition and utterances – in natural spoken language, people tend to repeat words, use filler words (like “uhm”, “ah”, and “like”), and can stop talking or change the course of conversation mid-sentence. While utterances and repetition can be useful to retain in the transcript for some purposes, there are other times when you might want to edit these out.
Removing any identifiers – for research in particular, it’s important to make sure that you protect the anonymity of participants at all times. Because Otter.ai transcribes verbatim, the text will include everything in the conversation (e.g. people’s names, names of businesses, areas, etc.). This is a particularly important consideration when conducting online interviews, for example, when the boundaries between private and professional lives can become blurred (particularly when participants are joining the interview from their home) and you can risk capturing personal information.
One important thing to note is that once you have made edits, the audio will then need time to realign with the text (which doesn’t always happen accurately). This is usually fine because if you notice a mistake with audio alignment in the Otter.ai app, you can always check the text against the original audio recording using the time stamps, but it’s useful to keep this in mind.
2. Annotate the transcript
While editing the transcript, I start to annotate key quotes that I think are useful or interesting for analysis. Usually I have a few research questions and/or themes from the academic literature in mind when analysing data, which helps to guide this process. I also add comments about emerging themes, or anything I think is interesting/relevant for the analysis stage. In Otter.ai, you can highlight text using the “highlight” function (all highlights are then summarised at the top of your transcript, below the title and key words). In addition to highlights, you can add individual comments to sections of the text, which appear in the margin in order.
I won’t go into too much detail here as I have covered this in my previous blog posts (e.g. this tutorial and this webinar), but automated transcription software can generate some really useful summaries of your transcript. This is particularly useful if you want to quickly see some of the (potential) themes in the transcript before conducting more in-depth analysis, e.g. if you’re working on a collaborative project and want to send your colleagues a brief summary. The image below shows the key words automatically generated by Otter.ai (which can also be viewed as a word cloud), i.e. the words that appear most frequently in the transcript. In this example, you can see from the key words that this webinar was about community engagement in a planning setting. Otter.ai will also tell you the amount of time (%) that each person speaks for in the transcript, amongst other quick insights.
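If you ever want to produce a similar “who spoke how much” summary yourself, it’s straightforward to approximate from an exported transcript. The sketch below assumes a simple hypothetical format where each line reads “Speaker: what they said” (not Otter.ai’s exact export layout), and approximates talk share by word count rather than actual speaking time.

```python
from collections import defaultdict

def speaker_shares(transcript):
    """Approximate each speaker's share of the conversation (%) by word count.
    Assumes lines of the form 'Speaker: speech' - an illustrative format."""
    counts = defaultdict(int)
    for line in transcript.strip().splitlines():
        if ":" not in line:
            continue  # skip lines without a speaker label
        speaker, speech = line.split(":", 1)
        counts[speaker.strip()] += len(speech.split())
    total = sum(counts.values())
    return {s: round(100 * n / total, 1) for s, n in counts.items()}

# Hypothetical two-person transcript for demonstration
demo = """
Alice: Thanks everyone for joining today's session on community engagement.
Bob: Happy to be here.
Alice: Let's start with the first question about digital planning tools.
"""
print(speaker_shares(demo))
```

Word count is a rough proxy (it ignores pauses and speaking speed), but it gives a quick sense of the balance of contributions, which can be useful when reflecting on power dynamics in focus groups.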
3. Prepare for analysis
It’s very straightforward to export a file from your chosen automated transcription software and move it to qualitative analysis software (in Otter.ai you can export your transcript as a TXT, DOCX, or PDF file, etc.). I export my files from Otter.ai in .txt format and import them to NVivo by QSR (“import” > “text file”). If you haven’t used NVivo before, there are some great tutorials on YouTube and their website.
Once the files are imported to NVivo, I copy and paste all of my comments from Otter.ai (see previous step) and add them as “annotations”. I do this by finding the relevant words (CTRL+F), highlighting the text in NVivo and adding the annotation (CTRL+A), then pasting the corresponding comment. There might be more efficient ways to do this, but it works well for me – when I’m analysing qualitative interviews, for example, repeatedly going through the transcript really helps to increase my familiarity with it.
Once I’ve added all the comments into the transcript in NVivo as annotations, I then start more in-depth analysis (coding). While I go through the transcript and code it into different themes, the annotations are really useful for highlighting quotes and insights which I may have otherwise overlooked. This is helpful for me because I have a thorough record of the various stages I went through to analyse my data, including emerging themes. If you’re unfamiliar with this software, make sure to check out the numerous free resources and tutorials available online (I’ve pasted a few links below).
I will also add the caveat here that this is just one way that I’ve been using automated transcription with qualitative analysis software, out of many potential approaches which can be facilitated by software (or not). While these tools have been really useful for me, this isn’t necessarily the best or most efficient way of doing it.
I collaborated with Natural England to produce outputs for embedding an evidence-led, best practice culture of engagement. This included delivering recommendations which are useful for any organisation thinking about improving their strategy for public and stakeholder engagement.
The available evidence for best practice public and stakeholder engagement was reviewed in a report and summarised in an accompanying infographic pack. Engagement is a process by which members of the public (or other key stakeholders like local authorities, businesses, and charities) can become involved in decisions which affect their lives. Engagement is essential for healthy democracies and ensuring that people are at the heart of tackling environmental issues, helping us to make better decisions for more sustainable and equitable outcomes for everyone.
The outputs from this research are suitable for anyone who is thinking about engaging, including practitioners, practice enablers, researchers, and policy makers who aim to involve members of the public and other key stakeholders in decision-making processes. While this work was focused on engagement in environmental decision-making, it is more broadly relevant to other areas of research and practice.
The report provides the evidence behind what engagement is and why it is important, what the benefits are, the potential risks of ‘poor’ engagement and how to mitigate them, how different ‘types’ of engagement can provide useful classifications for practitioners, and how practitioners can use theory (i.e., different ways of thinking and knowing) to inform best practice. This includes the challenges and opportunities of engaging during COVID-19 and in an increasingly digitised world, particularly considering the ethical implications of digital technologies.
The report outlines how the available evidence can be used to inform the creation of an evidence-led, best-practice engagement culture. It outlines 10 recommendations which consider engagement strategies, frameworks, standards, models, methods, toolkits (and so forth).
One central message in this review is that ‘best practice’ engagement and its outcomes will vary between different situations. Practitioners should recognise that the quality of the process and outcomes will change depending on the purpose and objectives for engaging, as well as organisational cultures of engagement, institutional capacity, wider socio-economic and political contexts, and the characteristics of participants.
The 10 recommendations in the report are:
1. Engagement is an ongoing process, not just a one-off activity.
2. Take time to understand the local context in which engagement is being carried out.
3. Engage stakeholders in dialogue as early as possible in the decision-making process.
4. Recognise the importance of integrating local and scientific knowledge and implement this in practice.
5. Manage power dynamics effectively, for example by using skilled facilitators who can help marginalised voices be heard and build trust in the process.
6. Think about the length and time scale of the engagement process and how often it might be necessary to engage with participants.
7. Recognise that different (digital/remote and in-person) tools and approaches for engagement will work differently in different situations.
8. Engagement coordinators need to manage participants’ expectations of the engagement process.
9. There are risks to engagement, some of which can be managed or mitigated.
10. Frameworks for engagement need to be institutionalised within organisations as a culture of engagement.