
Governing nature recovery in Scotland: growing transformative change

Rewilding on the banks of Loch Ness (source: Caitlin Hafferty)

Our study examines how governance influences transformative change in nature recovery initiatives. Part of the Leverhulme Centre for Nature Recovery at the University of Oxford, our research explores the governance conditions that enable or constrain the transformative potential of nature recovery for delivering simultaneous community, biodiversity, and climate benefits.

Successful nature recovery involves benefitting people and fostering deep connections between humans and nature. This requires embracing local, indigenous, and scientific knowledge through holistic, integrated, and participatory decision-making processes. However, current approaches risk perpetuating norms and practices that may exacerbate inequalities and injustices, hindering the changes necessary for transformative human-ecological well-being. In particular, there is considerable opportunity to investigate the role of natural and private capital, along with supportive policy mechanisms, in achieving scientifically robust ecological and climate goals while addressing social risks, enhancing community benefits, and strengthening local democracies. There is continued debate about how natural capital markets can support high-integrity and equitable nature recovery, including the risks and trade-offs involved in promoting meaningful community participation and empowerment. It is also important to explore how to balance collaboration at the local level with broader priorities, incentives, standards, and regulations.

We aim to understand, examine, and advocate for inclusive governance frameworks that actively address inequalities and promote collaboration to tackle the biodiversity and climate crises, build community wealth and a circular economy, strengthen democracy and social justice, ensure a just transition, and diversify land ownership. Key questions will revolve around:

  • How community and socio-economic benefits interact with nature recovery initiatives in Scotland.
  • How local knowledges, values, and community benefits can be captured, mapped, and integrated into monitoring, evaluation, and broader decision-making processes for landscape and ecological change.
  • Emerging opportunities for and tensions between mechanisms for financing, incentivising, and certifying nature recovery and delivering community benefits, strengthening local democracies, and contributing to a ‘just’ transformation.
  • How we can achieve the balance between high-integrity, credible, and scalable nature recovery and delivering democratic, socially inclusive forms of governance that are place-based and locally sensitive, while aligning with broader standards and priorities.

Overall, our study investigates how politics and governance – including framings, actors, and institutional dynamics – shape transformative change in nature recovery. We will explore how different governance and institutional arrangements affect social and ecological outcomes, drawing initially from case study landscapes in Scotland, and grounding these in national and international debates. This project ultimately aims to understand, promote, and embed new ideas and pathways for reimagining and remaking the future to support justice and well-being for humans and nature. In doing so, it aims to deliver conceptually-driven, pragmatic and actionable options for policy-makers and practitioners on how to ‘grow’ transformative change through nature recovery.

If you are involved in nature recovery initiatives in Scotland, we would love to hear from you!

We are conducting a study on the governance conditions that enable and constrain the transformative potential of nature recovery initiatives for meeting multiple ecological, climate, and social objectives. Your insights, expertise, and experiences will help us understand how we can grow transformative pathways that support justice and well-being for both humans and nature. Your contributions may also help inform pragmatic and actionable recommendations for policy-makers and practitioners in Scotland, across the UK, and beyond.

We appreciate that participating in academic studies takes time, and we believe it is important that research relationships are participatory and reciprocal with a positive, mutual exchange of benefits. If there is anything that the research team can do to help you in return, please do let us know!

Research team:

Caitlin Hafferty (Leverhulme Centre for Nature Recovery, University of Oxford).


Caitlin will be occasionally joined by two MSc students from the School of Geography and the Environment.

Who can participate?

Anyone involved in working with nature to benefit both people and biodiversity in Scotland is warmly invited to participate. This includes anyone involved in the policy, financing, strategy, design, and/or delivery of a range of nature recovery projects like rewilding and restoration, marine and peatland restoration, urban greening, species introduction and management, community-led conservation, and more. We welcome insights from a range of private, public, and civil or community-led initiatives, including blended finance collaborations and community wealth building.

We are initially interested in participants involved in nature recovery projects in Argyll and Bute, Inverness-shire, and Aberdeenshire, but we also welcome broader perspectives from across Scotland, the UK, and further afield. Please feel free to get in touch if you have any questions about your eligibility to participate.

What’s involved?

A 45-60 minute interview conducted in-person or online. Interviews will be semi-structured, informal and conversational, and can be conducted at a time and location to suit you. All interviews will be confidential and used solely for research purposes.

The research team will be based in Scotland and able to conduct in-person interviews in Argyll and Bute, Inverness-shire, and Aberdeenshire during specific periods in 2024.

In-person interviews are being conducted in Inverness-shire between 20 June and 10 July 2024. The research team would love to visit a diverse range of nature recovery projects in the area and conduct interviews while walking around the site. Alternatively, sit-down interviews (e.g., in a café) or online interviews can be arranged flexibly.

Further dates for in-person interviews in Inverness-shire, Argyll and Bute, and Aberdeenshire, are to be confirmed. Please get in touch with Caitlin to discuss.

How to enquire & more information

Please contact Caitlin Hafferty for more information or to arrange an interview. Please include a brief description of and/or link to your nature recovery project(s) or organisation. We look forward to hearing from you!

For more information about the project, what to expect, what happens to data provided, and more, you can view and/or download the project information sheet in the PDF below.

From automated transcription to qualitative analysis: 3 easy steps

This blog post has been reposted from my old blog (original post March 2022)

I have been using automated transcription software throughout my 3-year PhD to facilitate data collection and analysis. This tool has been indispensable for transcribing events (e.g. workshops and conferences), in-depth interviews and focus groups with research participants, meetings with colleagues, and much more. 

If you’re new to using automated transcription, see my previous blog posts, which offer an introduction and a tutorial. Importantly, automated transcription comes with a specific set of ethical and privacy considerations, which you can read more about in this post. Since writing these, I’ve given various talks and workshops on automated transcription – you can read a summary of the key messages from these here, including links to presentation slides and recordings.

In this post, I share some insights and tips from my experience using Otter to generate, edit, and prepare transcripts ready for qualitative analysis. In this example I use the qualitative analysis software NVivo by QSR. NVivo helps qualitative researchers to organise, analyse, and find insights in unstructured or qualitative data like interviews, open-ended survey responses, social media content, etc. However, there are lots of other proprietary tools you can use for analysing text, as well as free and open-source options such as Voyant Tools. You can also use programming languages like R and Python to conduct text mining and analytics (e.g. see this guide for text mining in R). Of course, computer-aided qualitative analysis isn’t the only way to go, and manual coding remains just as important.
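As a tiny illustration of the text-mining route mentioned above, here is a minimal Python sketch that counts the most frequent words in a transcript. The stopword list and example text are illustrative only; a real analysis would use a proper stopword list and your own transcript files.

```python
from collections import Counter
import re

# A tiny illustrative stopword list; a real analysis would use a fuller one
STOPWORDS = {"the", "a", "an", "and", "or", "to", "of", "in", "is", "it",
             "that", "this", "i", "you", "we", "so", "with"}

def top_words(text, n=10):
    """Return the n most frequent non-stopword words in a transcript."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(w for w in words if w not in STOPWORDS)
    return counts.most_common(n)

transcript = "Community engagement matters. Engagement with the community builds trust."
print(top_words(transcript, 3))  # [('community', 2), ('engagement', 2), ('matters', 1)]
```

This is only a rough frequency count – much like the automatic key words discussed later in this post – and is no substitute for reading the transcripts yourself.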

The core messages in this post should hopefully be relevant to a broad audience of researchers, regardless of the specific tools and approaches they are using. Equally, while I use Otter in this example, there are plenty of other free and paid tools available in 2022, many of which have pretty similar core features.

1. Edit the transcript

Once you’ve uploaded a recording into Otter (or used the live transcription function) and it has finished transcribing, you’ll need to edit it manually. Although Otter does a pretty accurate job of converting audio to text, the transcript will always need human input to check that there are no mistakes. This is particularly important for researchers who want to make sure that their participants’ contributions are accurately represented. It’s also beneficial to spend time going through each transcript to get a ‘feel’ for the data.

So, the first step is to read back through the transcript and correct any mistakes. Different methods work for different people, but I tend to read through and edit the transcript while listening back to the audio recording at around 1.5x to 2x speed, slowing down and speeding up as necessary. Now that I’ve been using this method for a long time, it’s become increasingly straightforward and efficient (it takes a few goes to really get used to it!).

The features offered in Otter are particularly useful for editing because you can listen to the audio while editing in your internet browser. As shown in the photo below, individual words are highlighted as the audio recording plays. However, do make sure you have a reliable internet connection so that everything saves properly (I’ve learnt this the hard way by losing lots of edited data and having to start again!). If I’m working somewhere with a poor WiFi connection, I usually export the edited transcript as a text file at regular intervals, so that if the edited transcript doesn’t save properly I at least don’t lose all of my edits.

A screenshot of the browser interface showing how words in the transcript are highlighted as the audio plays, speaker labelling, and the speed settings. (Transcript source: public webinar “Engaging for the Future”, Commonplace).
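If you’d like to automate that backup habit, a small script can save timestamped copies of an exported transcript. This is just a sketch; the function name and file paths are hypothetical, and you should adapt it to however you organise your own files.

```python
from datetime import datetime
from pathlib import Path

def backup_transcript(transcript_path, backup_dir="backups"):
    """Save a timestamped copy of an exported transcript so edits aren't lost."""
    src = Path(transcript_path)
    dest_dir = Path(backup_dir)
    dest_dir.mkdir(parents=True, exist_ok=True)  # create the backup folder if needed
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    dest = dest_dir / f"{src.stem}-{stamp}{src.suffix}"
    dest.write_text(src.read_text(encoding="utf-8"), encoding="utf-8")
    return dest
```

Running this each time you export (e.g. `backup_transcript("interview-01.txt")`) leaves a dated trail of copies you can fall back on if the online version fails to save.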

The key things that I check for when editing include:

  • Punctuation errors – e.g. full stops, commas, and question marks where they shouldn’t be (or a lack of punctuation in the right places).
  • Random paragraph breaks – sometimes, for example when a speaker pauses mid-sentence, Otter automatically starts a new paragraph, so it’s worth checking whether this has happened and merging paragraphs where necessary.
  • Lack of paragraph breaks – Otter has a tendency to generate long monologues of speech, which might need to be broken up into smaller paragraphs to make them easier to read.
  • Spelling errors and incorrect words – I find this happens quite a lot when transcribing different accents, when specific names and locations are mentioned, or when abbreviations are used. 
  • Linked to the above, please do carefully check for any words which could be interpreted as rude or inappropriate – I won’t repeat any here, but I have removed some rather interesting misinterpretations of words from some of my transcripts (!).
  • Mislabelled speakers – it’s really important to check that Otter has labelled your speakers correctly and not mislabelled anyone (this can happen, for example, when someone interrupts someone else mid-sentence, or if two people have very similar-sounding voices).
  • Remove repetition and utterances – in natural spoken language, people tend to repeat words, use filler words (like “uhm”, “ah”, and “like”), and can stop talking or change the course of conversation mid-sentence. While utterances and repetition can be useful to retain in the transcript for some purposes, there are other times when you might want to edit these out.
  • Removing any identifiers – for research in particular, it’s important to protect the anonymity of participants at all times. Because Otter transcribes verbatim, the text will include everything in the conversation (e.g. people’s names, names of businesses, areas, etc.). This is a particularly important consideration when conducting online interviews, when the boundaries between private and professional lives can become blurred (particularly when participants join the interview from their home) and you risk capturing personal information.
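Some of these clean-up steps can be partially scripted. The sketch below is illustrative only – the filler patterns and the name list are examples I’ve invented, and any automated pass like this still needs the manual checking described above (it will never catch everything, and can over-match):

```python
import re

# Illustrative filler patterns; extend to suit your own transcripts
FILLERS = r"\b(?:uhm+|um+|uh+|erm*)\b[,.]?\s*"

def tidy(text, names=()):
    """Strip common filler words and replace supplied names with a placeholder."""
    text = re.sub(FILLERS, "", text, flags=re.IGNORECASE)
    for name in names:  # names to anonymise, chosen by the researcher
        text = re.sub(rf"\b{re.escape(name)}\b", "[participant]", text)
    return re.sub(r"\s{2,}", " ", text).strip()  # collapse leftover double spaces

print(tidy("Uhm, I think, uh, Sarah raised a good point.", names=["Sarah"]))
# I think, [participant] raised a good point.
```

Whether you keep or remove fillers depends on your analysis – for conversation analysis, for example, you would want to retain them – so treat a script like this as a convenience, not a default.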

One important thing to note is that once you have made edits, the audio needs time to realign with the text (and doesn’t always realign accurately). This is usually fine: if you notice an alignment mistake in the app, you can always check the text against the original audio recording using the time stamps, but it’s useful to keep in mind.

2. Annotate the transcript 

While editing the transcript, I start to annotate key quotes that I think are useful or interesting for analysis. Usually I have a few research questions and/or themes from the academic literature in mind when analysing data, which helps to guide this process. I also add comments about emerging themes, or anything I think is interesting or relevant for the analysis stage. In Otter, you can highlight text using the “highlight” function (all highlights are then summarised at the top of your transcript, below the title and key words). In addition to highlights, you can add individual comments to sections of the text, which appear in order in the margin.

A screenshot of the browser interface showing how you can highlight and comment on text. (Transcript source: public webinar “Engaging for the Future”, Commonplace).

I won’t go into too much detail here, as I have covered this in my previous blog posts (e.g. this tutorial and this webinar), but automated transcription software can generate some really useful summaries of your transcript. This is particularly useful if you want to quickly see some of the (potential) themes in the transcript before conducting more in-depth analysis, e.g. if you’re working on a collaborative project and want to send your colleagues a brief summary. The image below shows the key words automatically generated by Otter (which can also be viewed as a word cloud), based on the words that appear most frequently in the transcript. In this example, you can see from the key words that this webinar was about community engagement in a planning setting. Otter will also tell you the percentage of time that each person speaks in the transcript, amongst other quick insights.

A screenshot of the browser interface showing automatically generated key words. (Transcript source: public webinar “Engaging for the Future”, Commonplace).
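A statistic like speaker talk-time is also straightforward to approximate yourself. Here is a hypothetical Python sketch that estimates each speaker’s share by word count, assuming a simplified “Speaker: utterance” transcript layout (real exports are formatted differently, so you would need to adapt the parsing):

```python
from collections import Counter

def speaking_share(lines):
    """Estimate each speaker's share of the conversation by word count,
    from simplified 'Speaker: utterance' lines."""
    words = Counter()
    for line in lines:
        if ":" not in line:
            continue  # skip lines without a speaker label
        speaker, utterance = line.split(":", 1)
        words[speaker.strip()] += len(utterance.split())
    total = sum(words.values())
    return {s: round(100 * n / total, 1) for s, n in words.items()}

demo = ["Alice: welcome everyone to the webinar",
        "Bob: thanks for having me"]
print(speaking_share(demo))  # {'Alice': 55.6, 'Bob': 44.4}
```

Note this measures words rather than seconds, so it is only a rough proxy for the time-based percentages the software reports.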

3. Prepare for analysis

It’s very straightforward to export a file from your chosen automated transcription software and move it to qualitative analysis software (in Otter you can export your transcript as a TXT, DOCX, or PDF file, among other formats). I export my files from Otter in .txt format and import them into NVivo by QSR (“import” > “text file”). If you haven’t used NVivo before, there are some great tutorials on YouTube and on their website.

Once the files are imported into NVivo, I copy and paste all of my comments from Otter (see the previous step) and add them as “annotations”. I do this by finding the relevant words (CTRL+F), highlighting the text in NVivo and adding the annotation (CTRL+A), then pasting the corresponding comment. There might be more efficient ways to do this, but it works well for me – when I’m analysing qualitative interviews, for example, repeatedly going through the transcript really helps to increase my familiarity with it.

A screenshot of NVivo by QSR showing one way that comments from Otter can be used to create annotations and themes for analysis. (Transcript source: public webinar “Engaging for the Future”, Commonplace).

Once I’ve added all the comments into the transcript in NVivo as annotations, I then start more in-depth analysis (coding). While I go through the transcript and code it into different themes, the annotations are really useful for highlighting quotes and insights which I may have otherwise overlooked. This is helpful for me because I have a thorough record of the various stages I went through to analyse my data, including emerging themes. If you’re unfamiliar with this software, make sure to check out the numerous free resources and tutorials available online (I’ve pasted a few links below).

I will also add the caveat that this is just one way I’ve been using automated transcription with qualitative analysis software, out of many potential approaches, with or without software. While these tools have been really useful for me, this isn’t necessarily the best or most efficient way of doing it.