Collaborative coding of qualitative data

Daniel Turner

Coding qualitative data is a huge undertaking. Even with relatively small datasets, it can be a time-consuming and intensive process, and relying on just one person to interpret complex and rich data can leave out alternative viewpoints and risk key insights being missed. For one or both of these reasons, qualitative analysis is often performed by a collaborative team, with multiple coders either splitting up the task or providing multiple interpretations and checks on the analysis. Collaborative coding is something that Quirkos users have been requesting for a long time, and we are so excited to be releasing it this month. First, though, let’s look at the reasons researchers might want to share the coding and analysis process, and at some practical and methodological considerations.

There’s a great chapter on collaboration by Cornish, Gillespie and Zittoun (2014) in The SAGE Handbook of Qualitative Data Analysis. First, they ask who is involved in the analytic collaboration. It could be academic-academic, or involve practitioners, and anything from two to a dozen people. It can also involve ‘lay-persons’ through participatory analysis, something we’ve written about before. Collaborators may also have different roles: for example, there may be primary and secondary coders in a hierarchy, or everyone may have the same role and status in decision making.

Work can also be assigned in many different ways, depending on the volume of data, researcher time and the methodological approach to validity. Sometimes more than one coder will analyse the whole dataset for validity, or sources may be split up so that each person only has to code part of the data – especially when there are a large number of sources. Alternatively, another researcher could be brought in for ‘sense-checking’, reviewing already coded work to see if anything was missed, or questioning interpretation. It’s unusual to split work by theme, with one person coding a single topic area: since qualitative data has to be read through completely anyway, coders may as well work with all the topics and themes in the project.

Enacting collaborative coding is a process to be managed all of its own. Richards and Hemphill (2018) describe this in six stages: essentially, planning the practicalities of how researchers will contribute, designing a codebook or framework, testing the codebook, doing the coding itself, and finally reviewing the process and results. These basic steps are the same regardless of the analytical approach, and while Richards and Hemphill (2018) describe open and axial coding, pretty much any coding approach will work (e.g. IPA, grounded theory, thematic or discourse analysis). Often a codebook or coding framework is agreed before coding starts, with all researchers discussing which themes they want to look for in the data and how they will be worded and defined, based around answering the research questions.

However, when using grounded theory or other approaches where codes/themes are generated during the course of the analysis, there is an extra need for reviewing codes or themes as they develop. This also needs a set of rules: deciding when new codes will be created, whether everyone can create them, and guidelines for when to merge codes or group them into themes.

There is also the question of how collaboration will happen – will it be in person? Remotely? If so, over the phone? Video chat? Directly in qualitative analysis software? The practicalities (especially distance and the capabilities of the software) will dictate much of what is realistic, but note that collaboration could involve anything from a long structured workshop session to a chat over coffee.

The new live collaboration feature in Quirkos Cloud offers a unique and flexible way to collaborate on qualitative projects. With data stored remotely on the cloud, users can log in from any computer to access the project and view, edit and discuss it in real time. This eliminates the need to ‘manage’ project files: knowing who has edited a file and when, where it will be stored, and who has worked on it.

However, it also allows flexibility for researchers to contribute to the project as and when they have time. It doesn’t require everyone to work on the project at set times, and this is very useful when working across time-zones (or teaching schedules!).

The live chat feature in Quirkos Cloud is unique to Quirkos, and one that a lot of researchers are excited about. Part of the challenge in analysing qualitative data as part of a team is having a way to log changes and decisions, and to discuss issues with others. The chat function allows teams to do this in a familiar and intuitive way, recording and saving the conversation, user and time of each comment. This acts as an easy-to-read playback of the analytic process, as well as a place for more informal communication and for discussing the practicalities of the process.

The chat feature is also available in static offline projects (since they have identical files and features). This means that you can also use the ‘chat’ as a reflexive coding journal, or to keep project-level notes and analytic writing attached to the project. I’m really interested to see the different ways users put this new functionality to work!

There are a couple of other software tools that allow collaboration, but we think that Quirkos Cloud will be the easiest to work with. Nvivo Teams requires server installation, and although pricing is not publicly available, it’s eye-wateringly expensive. Dedoose is a much better cloud platform for collaboration, but its reliance on installing insecure versions of Flash and Air means that it’s becoming difficult to run.

The collaboration features in Quirkos are free for cloud users, and there is no limit to the number of projects you can share or the number of users on each project. It’s easy to share a project (you only need the user’s email address), and you can give each person full editing or read-only access as needed. All projects are securely saved on our cloud servers so the whole team can access them from anywhere, but you always have the option to download a project and work with it offline.

Using the query view, you can get side-by-side views of different team members’ coding, to see how you have differed and where everyone agrees. While we don’t provide any quantitative measures of inter-rater reliability or agreement (more on this in a future blog post), it’s still easy to find and discuss differences.
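
Quirkos doesn’t calculate agreement statistics for you, but if you can export each coder’s yes/no decisions for a code across the same set of segments, a rough measure is only a few lines away. The snippet below is a purely illustrative Python sketch (the coder lists are invented); it computes raw percent agreement and Cohen’s kappa for a single code.

```python
# Illustrative sketch only: compare two coders' yes/no decisions on whether a
# single code applies to each segment (lists exported from your QDA tool).
def percent_agreement(coder_a, coder_b):
    matches = sum(a == b for a, b in zip(coder_a, coder_b))
    return matches / len(coder_a)

def cohens_kappa(coder_a, coder_b):
    n = len(coder_a)
    observed = percent_agreement(coder_a, coder_b)
    # Chance agreement, from each coder's marginal rate of applying the code
    p_a, p_b = sum(coder_a) / n, sum(coder_b) / n
    expected = p_a * p_b + (1 - p_a) * (1 - p_b)
    return (observed - expected) / (1 - expected)

coder_a = [1, 1, 0, 1, 0, 0, 1, 1]   # hypothetical decisions, 1 = code applied
coder_b = [1, 0, 0, 1, 0, 1, 1, 1]
print(percent_agreement(coder_a, coder_b))        # 0.75
print(round(cohens_kappa(coder_a, coder_b), 2))   # 0.47
```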

But the main benefit will be how whole teams can now collaborate using the simple and visual Quirkos interface. This means it’s quick for everyone to learn, fast to use while coding, and just plain fun to see the highlights and bubbles growing in real time as people work on the projects. It really connects you to your project and team much more than passing around a static document. The full feature set will be available in Quirkos 2.3 (out in late Feb 2020), but you can try the intuitive visual Quirkos approach to qualitative analysis with a free trial of the offline or cloud version from https://www.quirkos.com/get.html or read more about the features here.

Richards, K. R., & Hemphill, M. A. (2018). A Practical Guide to Collaborative Qualitative Data Analysis. Journal of Teaching in Physical Education, 37(2), 225-231. Retrieved Feb 7, 2020, from https://journals.humankinetics.com/view/journals/jtpe/37/2/article-p225.xml

Cornish, F., Gillespie, A., & Zittoun, T. (2014). Collaborative analysis of qualitative data. In U. Flick (Ed.), The SAGE Handbook of Qualitative Data Analysis (pp. 79-93). London: SAGE Publications Ltd. https://methods.sagepub.com/book/the-sage-handbook-of-qualitative-data-analysis/n6.xml

Qualitative Data Coding 101

How to code qualitative data, the smart way (with examples).

By: Jenna Crosley (PhD) | Reviewed by: Dr Eunice Rautenbach | December 2020

As we’ve discussed previously, qualitative research makes use of non-numerical data – for example, words, phrases or even images and video. To analyse this kind of data, the first dragon you’ll need to slay is qualitative data coding (or just “coding” if you want to sound cool). But what exactly is coding and how do you do it?

Overview: Qualitative Data Coding

In this post, we’ll explain qualitative data coding in simple terms. Specifically, we’ll dig into:

  • What exactly qualitative data coding is
  • What different types of coding exist
  • How to code qualitative data (the process)
  • Moving from coding to qualitative analysis
  • Tips and tricks for quality data coding

Qualitative Data Coding: The Basics

What is qualitative data coding?

Let’s start by understanding what a code is. At the simplest level,  a code is a label that describes the content  of a piece of text. For example, in the sentence:

“Pigeons attacked me and stole my sandwich.”

You could use “pigeons” as a code. This code simply describes that the sentence involves pigeons.

So, building onto this,  qualitative data coding is the process of creating and assigning codes to categorise data extracts.   You’ll then use these codes later down the road to derive themes and patterns for your qualitative analysis (for example, thematic analysis ). Coding and analysis can take place simultaneously, but it’s important to note that coding does not necessarily involve identifying themes (depending on which textbook you’re reading, of course). Instead, it generally refers to the process of  labelling and grouping similar types of data  to make generating themes and analysing the data more manageable. 

Makes sense? Great. But why should you bother with coding at all? Why not just look for themes from the outset? Well, coding is a way of making sure your  data is valid . In other words, it helps ensure that your  analysis is undertaken systematically  and that other researchers can review it (in the world of research, we call this transparency). In other words, good coding is the foundation of high-quality analysis.

What are the different types of coding?

Now that we’ve got a plain-language definition of coding on the table, the next step is to understand what overarching types of coding exist – in other words, coding approaches . Let’s start with the two main approaches, inductive and deductive .

With deductive coding, you, as the researcher, begin with a set of pre-established codes and apply them to your data set (for example, a set of interview transcripts). Inductive coding, on the other hand, works in reverse: you create the set of codes based on the data itself – in other words, the codes emerge from the data. Let’s take a closer look at both.

Deductive coding 101

With deductive coding, we make use of pre-established codes, which are developed before you interact with the present data. This usually involves drawing up a set of  codes based on a research question or previous research . You could also use a code set from the codebook of a previous study.

For example, if you were studying the eating habits of college students, you might have a research question along the lines of 

“What foods do college students eat the most?”

As a result of this research question, you might develop a code set that includes codes such as “sushi”, “pizza”, and “burgers”.  

Deductive coding allows you to approach your analysis with a very tightly focused lens and quickly identify relevant data . Of course, the downside is that you could miss out on some very valuable insights as a result of this tight, predetermined focus. 
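
To make that concrete, here is a minimal Python sketch of a deductive pass, assuming each pre-defined code comes with a hypothetical keyword list and each response is a plain string. Real deductive coding is a judgement call rather than a keyword match; the point is simply that the code set is fixed before you touch the data.

```python
# Minimal deductive-coding sketch: the codebook is fixed up front and never
# changes during coding. Keywords here are hypothetical illustrations.
CODEBOOK = {
    "sushi":   ["sushi", "sashimi", "maki"],
    "pizza":   ["pizza", "slice", "margherita"],
    "burgers": ["burger", "cheeseburger"],
}

def apply_deductive_codes(response, codebook=CODEBOOK):
    text = response.lower()
    return [code for code, keywords in codebook.items()
            if any(word in text for word in keywords)]

responses = [
    "Most nights I just grab a slice of pizza on campus.",
    "My flatmates and I make maki rolls every Sunday.",
]
for r in responses:
    print(apply_deductive_codes(r), "<-", r)
# ['pizza'] <- Most nights I just grab a slice of pizza on campus.
# ['sushi'] <- My flatmates and I make maki rolls every Sunday.
```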

Inductive coding 101 

But what about inductive coding? As we touched on earlier, this type of coding involves jumping right into the data and then developing the codes  based on what you find  within the data. 

For example, if you were to analyse a set of open-ended interviews, you wouldn’t necessarily know which direction the conversation would take. If a conversation begins with a discussion of cats, it may go on to include other animals too, and so you’d add these codes as you progress with your analysis. Simply put, with inductive coding, you “go with the flow” of the data.

Inductive coding is great when you’re researching something that isn’t yet well understood because the coding derived from the data helps you explore the subject. Therefore, this type of coding is usually used when researchers want to investigate new ideas or concepts , or when they want to create new theories. 

A little bit of both… hybrid coding approaches

If you’ve got a set of codes you’ve derived from a research topic, literature review or a previous study (i.e. a deductive approach), but you still don’t have a rich enough set to capture the depth of your qualitative data, you can  combine deductive and inductive  methods – this is called a  hybrid  coding approach. 

To adopt a hybrid approach, you’ll begin your analysis with a set of a priori codes (deductive) and then add new codes (inductive) as you work your way through the data. Essentially, the hybrid coding approach provides the best of both worlds, which is why it’s pretty common to see this in research.
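
As a rough illustration of that workflow, the Python sketch below (with made-up codes and extracts) starts from a small a priori code set and simply adds any new code that isn’t already in the codebook: the deductive frame stays in place while inductive codes grow around it.

```python
# Hybrid coding sketch: a priori (deductive) codes plus new (inductive) codes
# added whenever an extract doesn't fit the existing set. All names are made up.
codebook = {"work-life balance", "pay", "career development"}   # a priori codes
codings = []                                                    # (extract, code) pairs

def assign(extract, code):
    """Record a coding decision, growing the codebook if the code is new."""
    if code not in codebook:
        codebook.add(code)        # inductive: a new code emerges from the data
    codings.append((extract, code))

assign("My manager never acknowledges good work", "recognition")        # new code
assign("I can pick the kids up from school every day now", "work-life balance")
print(sorted(codebook))
# ['career development', 'pay', 'recognition', 'work-life balance']
```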

How to code qualitative data

Now that we’ve looked at the main approaches to coding, the next question you’re probably asking is “how do I actually do it?”. Let’s take a look at the  coding process , step by step.

Both inductive and deductive methods of coding typically occur in two stages: initial coding and line by line coding.

In the initial coding stage, the objective is to get a general overview of the data by reading through and understanding it. If you’re using an inductive approach, this is also where you’ll develop an initial set of codes. Then, in the second stage (line by line coding), you’ll delve deeper into the data and (re)organise it according to (potentially new) codes. 

Step 1 – Initial coding

The first step of the coding process is to identify  the essence  of the text and code it accordingly. While there are various qualitative analysis software packages available, you can just as easily code textual data using Microsoft Word’s “comments” feature. 

Let’s take a look at a practical example of coding. Assume you had the following interview data from two interviewees:

What pets do you have?

I have an alpaca and three dogs.

Only one alpaca? They can die of loneliness if they don’t have a friend.

I didn’t know that! I’ll just have to get five more. 

I have twenty-three bunnies. I initially only had two, I’m not sure what happened. 

In the initial stage of coding, you could assign the code of “pets” or “animals”. These are just initial,  fairly broad codes  that you can (and will) develop and refine later. In the initial stage, broad, rough codes are fine – they’re just a starting point which you will build onto in the second stage. 
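
If you’re coding outside dedicated software (in Word, a spreadsheet, or by hand), it helps to keep the coded segments in an explicit structure. Here’s a purely illustrative Python sketch, using the interview above and the broad codes just mentioned:

```python
# Illustrative only: each coded segment keeps its speaker, the extract, and the
# broad codes applied so far; later passes simply append more specific codes.
coded_segments = [
    {"speaker": "Interviewee 1",
     "extract": "I have an alpaca and three dogs.",
     "codes": ["pets"]},
    {"speaker": "Interviewee 2",
     "extract": "I have twenty-three bunnies.",
     "codes": ["pets"]},
]

coded_segments[0]["codes"].append("dogs")      # refined in a later pass
coded_segments[1]["codes"].append("rabbits")
print([seg["codes"] for seg in coded_segments])
# [['pets', 'dogs'], ['pets', 'rabbits']]
```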

How to decide which codes to use

But how exactly do you decide what codes to use when there are many ways to read and interpret any given sentence? Well, there are a few different approaches you can adopt. The main approaches to initial coding include:

  • In vivo coding
  • Process coding
  • Open coding
  • Descriptive coding
  • Structural coding
  • Values coding

Let’s take a look at each of these:

In vivo coding

When you use in vivo coding, you make use of a participant’s own words, rather than your interpretation of the data. In other words, you use direct quotes from participants as your codes. By doing this, you’ll avoid trying to infer meaning, and instead stay as close to the original phrases and words as possible.

In vivo coding is particularly useful when your data are derived from participants who speak different languages or come from different cultures. In these cases, it’s often difficult to accurately infer meaning due to linguistic or cultural differences. 

For example, English speakers typically view the future as in front of them and the past as behind them. However, this isn’t the same in all cultures. Speakers of Aymara view the past as in front of them and the future as behind them. Why? Because the future is unknown, so it must be out of sight (or behind us). They know what happened in the past, so their perspective is that it’s positioned in front of them, where they can “see” it. 

In a scenario like this one, it’s not possible to derive the reason for viewing the past as in front and the future as behind without knowing the Aymara culture’s perception of time. Therefore, in vivo coding is particularly useful, as it avoids interpretation errors.

Process coding

Next up, there’s process coding, which makes use of action-based codes. Action-based codes are codes that indicate a movement or procedure. These actions are often indicated by gerunds (words ending in “-ing”) – for example, running, jumping or singing.

Process coding is useful as it allows you to code parts of data that aren’t necessarily spoken, but that are still imperative to understanding the meaning of the texts. 

An example here would be if a participant were to say something like, “I have no idea where she is”. A sentence like this can be interpreted in many different ways depending on the context and movements of the participant. The participant could shrug their shoulders, which would indicate that they genuinely don’t know where the girl is; however, they could also wink, showing that they do actually know where the girl is. 

Simply put, process coding is useful as it allows you to, in a concise manner, identify the main occurrences in a set of data and provide a dynamic account of events. For example, you may have action codes such as, “describing a panda”, “singing a song about bananas”, or “arguing with a relative”.

Descriptive coding

Descriptive coding aims to summarise extracts by using a single word or noun that encapsulates the general idea of the data. These words will typically describe the data in a highly condensed manner, which allows the researcher to quickly refer to the content.

Descriptive coding is very useful when dealing with data that appear in forms other than traditional text – i.e. video clips, sound recordings or images. For example, a descriptive code could be “food” when coding a video clip that involves a group of people discussing what they ate throughout the day, or “cooking” when coding an image showing the steps of a recipe. 

Structural coding

Structural coding involves labelling and describing specific structural attributes of the data. Generally, it includes coding according to answers to the questions of “who”, “what”, “where”, and “how”, rather than the actual topics expressed in the data. This type of coding is useful when you want to access segments of data quickly, and it can help tremendously when you’re dealing with large data sets.

For example, if you were coding a collection of theses or dissertations (which would be quite a large data set), structural coding could be useful as you could code according to different sections within each of these documents – i.e. according to the standard  dissertation structure . What-centric labels such as “hypothesis”, “literature review”, and “methodology” would help you to efficiently refer to sections and navigate without having to work through sections of data all over again. 

Structural coding is also useful for data from open-ended surveys. This data may initially be difficult to code, as it lacks the set structure of other forms of data (such as an interview with a strict set of questions to be answered). In this case, it would be useful to code sections of data that answer certain questions such as “who?”, “what?”, “where?” and “how?”.

Let’s take a look at a practical example. If we were to send out a survey asking people about their dogs, we may end up with a (highly condensed) response such as the following: 

Bella is my best friend. When I’m at home I like to sit on the floor with her and roll her ball across the carpet for her to fetch and bring back to me. I love my dog.

In this set, we could code Bella as “who”, dog as “what”, home and floor as “where”, and roll her ball as “how”.
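
If you wanted to capture that structural pass outside of analysis software, it boils down to a small lookup keyed by question type. A purely illustrative sketch:

```python
# Illustrative only: structural codes for the survey response above,
# keyed by the question each segment answers.
structural_codes = {
    "who":   "Bella",
    "what":  "dog",
    "where": ["home", "floor"],
    "how":   "roll her ball",
}
print(structural_codes["where"])   # ['home', 'floor']
```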

Values coding

Finally, values coding involves coding that relates to the participant’s worldviews. Typically, this type of coding focuses on excerpts that reflect the values, attitudes, and beliefs of the participants. Values coding is therefore very useful for research exploring cultural values and intrapersonal experiences and actions.

To recap, the aim of initial coding is to understand and  familiarise yourself with your data , to  develop an initial code set  (if you’re taking an inductive approach) and to take the first shot at  coding your data . The coding approaches above allow you to arrange your data so that it’s easier to navigate during the next stage, line by line coding (we’ll get to this soon). 

While these approaches can all be used individually, it’s important to remember that it’s possible, and potentially beneficial, to  combine them . For example, when conducting initial coding with interviews, you could begin by using structural coding to indicate who speaks when. Then, as a next step, you could apply descriptive coding so that you can navigate to, and between, conversation topics easily. You can check out some examples of various techniques here .

Step 2 – Line by line coding

Once you’ve got an overall idea of your data, are comfortable navigating it and have applied some initial codes, you can move on to line by line coding. Line by line coding is pretty much exactly what it sounds like – reviewing your data, line by line, digging deeper and assigning additional codes to each line.

With line-by-line coding, the objective is to pay close attention to your data to  add detail  to your codes. For example, if you have a discussion of beverages and you previously just coded this as “beverages”, you could now go deeper and code more specifically, such as “coffee”, “tea”, and “orange juice”. The aim here is to scratch below the surface. This is the time to get detailed and specific so as to capture as much richness from the data as possible. 
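
As a small illustration of that refinement step, the Python sketch below splits the broad “beverages” code into more specific codes using hypothetical keyword cues, keeping the broad code where nothing more specific fits:

```python
# Illustrative sketch of line-by-line refinement: a broad first-pass code is
# replaced with more specific codes where hypothetical cues match.
specific_cues = {
    "coffee":       ["coffee", "espresso"],
    "tea":          ["tea", "chai"],
    "orange juice": ["orange juice", "oj"],
}

def refine(line, broad_code="beverages"):
    text = line.lower()
    specific = [code for code, cues in specific_cues.items()
                if any(cue in text for cue in cues)]
    return specific or [broad_code]   # keep the broad code if nothing fits

print(refine("I can't start the day without an espresso."))    # ['coffee']
print(refine("We mostly drank whatever was in the fridge."))   # ['beverages']
```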

In the line-by-line coding process, it’s useful to  code everything  in your data, even if you don’t think you’re going to use it (you may just end up needing it!). As you go through this process, your coding will become more thorough and detailed, and you’ll have a much better understanding of your data as a result of this, which will be incredibly valuable in the analysis phase.

Moving from coding to analysis

Once you’ve completed your initial coding and line by line coding, the next step is to  start your analysis . Of course, the coding process itself will get you in “analysis mode” and you’ll probably already have some insights and ideas as a result of it, so you should always keep notes of your thoughts as you work through the coding.  

When it comes to qualitative data analysis, there are  many different types of analyses  (we discuss some of the  most popular ones here ) and the type of analysis you adopt will depend heavily on your research aims, objectives and questions . Therefore, we’re not going to go down that rabbit hole here, but we’ll cover the important first steps that build the bridge from qualitative data coding to qualitative analysis.

When starting to think about your analysis, it’s useful to  ask yourself  the following questions to get the wheels turning:

  • What actions are shown in the data? 
  • What are the aims of these interactions and excerpts? What are the participants potentially trying to achieve?
  • How do participants interpret what is happening, and how do they speak about it? What does their language reveal?
  • What are the assumptions made by the participants? 
  • What are the participants doing? What is going on? 
  • Why do I want to learn about this? What am I trying to find out? 
  • Why did I include this particular excerpt? What does it represent and how?

Code categorisation

Categorisation is simply the process of reviewing everything you’ve coded and then  creating code categories  that can be used to guide your future analysis. In other words, it’s about creating categories for your code set. Let’s take a look at a practical example.

If you were discussing different types of animals, your initial codes may be “dogs”, “llamas”, and “lions”. In the process of categorisation, you could label (categorise) these three animals as “mammals”, whereas you could categorise “flies”, “crickets”, and “beetles” as “insects”. By creating these code categories, you will be making your data more organised, as well as enriching it so that you can see new connections between different groups of codes. 
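
In practice, categorisation amounts to a mapping from codes to categories, which you can then use to roll counts (or extracts) up to the category level. A small illustrative Python sketch, with invented per-extract codes:

```python
from collections import Counter

# Illustrative only: roll individual codes up into the categories described above.
code_to_category = {
    "dogs": "mammals", "llamas": "mammals", "lions": "mammals",
    "flies": "insects", "crickets": "insects", "beetles": "insects",
}

coded_extracts = [["dogs"], ["lions", "flies"], ["beetles"], ["llamas", "dogs"]]

category_counts = Counter(code_to_category[code]
                          for codes in coded_extracts for code in codes)
print(category_counts)   # Counter({'mammals': 4, 'insects': 2})
```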

Theme identification

From the coding and categorisation processes, you’ll naturally start noticing themes. Therefore, the logical next step is to identify and clearly articulate the themes in your data set. When you determine themes, you’ll take what you’ve learned from the coding and categorisation and group it all together to develop themes. This is the part of the coding process where you’ll try to draw meaning from your data, and start to produce a narrative. The nature of this narrative depends on your research aims and objectives, as well as your research questions (sound familiar?) and the qualitative data analysis method you’ve chosen, so keep these factors front of mind as you scan for themes.

Tips & tricks for quality coding

Before we wrap up, let’s quickly look at some general advice, tips and suggestions to ensure your qualitative data coding is top-notch.

  • Before you begin coding,  plan out the steps  you will take and the coding approach and technique(s) you will follow to avoid inconsistencies. 
  • When adopting deductive coding, it’s useful to  use a codebook  from the start of the coding process. This will keep your work organised and will ensure that you don’t forget any of your codes. 
  • Whether you’re adopting an inductive or deductive approach,  keep track of the meanings  of your codes and remember to revisit these as you go along.
  • Avoid using synonyms  for codes that are similar, if not the same. This will allow you to have a more uniform and accurate coded dataset and will also help you to not get overwhelmed by your data.
  • While coding, make sure that you remind yourself of your aims and coding method. This will help you to avoid directional drift, which happens when coding is not kept consistent.
  • If you are working in a team, make sure that everyone has  been trained and understands  how codes need to be assigned. 

Guide to Qualitative Data Coding: Best Analysis Methods

Qualitative data is where raw feedback becomes insight, and insight drives meaningful action. It brings context to life, straight from customers eager to share their honest thoughts about your brand.

But without a plan to make sense of qualitative insights, they're at risk of collecting digital dust. That's where qualitative coding comes in.

In this guide, we're going to walk through how to do qualitative data analysis, so you can turn your qualitative data into the goldmine that it is – and then some.

Below, we'll explore:

Various qualitative data analysis methods

Types of qualitative data sources, and effective strategies for data collection

A walkthrough of the best qualitative coding methods by research goal

Let's dive in!

What is qualitative data coding?

Qualitative data coding is the process of analyzing and categorizing qualitative (non-numerical) data, such as interview transcripts, open-ended survey responses, or observational notes to arrive at patterns and themes.

Coding involves assigning descriptive labels or "codes" to segments of the qualitative data, to summarize and condense the information. Coding can be done inductively, where the codes emerge from the data itself, or deductively, where the researcher starts with a pre-determined set of codes based on existing theories or frameworks.

What's the benefit of qualitative data analysis?

Qualitative data dives into the intricacies of human experiences that quantitative data often overlooks. Qualitative research typically provides a deeper, more nuanced understanding of human behaviour, experiences, perceptions, and motivations. It can reveal the "why" and "how" behind the "what" that quantitative data shows.

Qualitative research is generally more flexible, and can be adapted to explore new or unexpected insights that emerge during the research process. It's a great tool that complements and enhances quantitative research.

Types of qualitative data

Qualitative data comes in various forms. Each offers unique insights into different aspects of the human experience. Understanding the different types of qualitative data is the key to designing effective research methodologies, and strategies for your team to code qualitative data effectively.

Let’s explore some common types of qualitative data:

1. Textual Data

What it is: Written or verbal data in the form of transcripts, interviews, focus group discussions, open-ended survey responses, social media comments, emails, or customer reviews.

Advantages: Provides rich contextual information, sentiments, opinions, and narratives from direct interactions with customers or stakeholders.

2. Visual Data

What it is: Images, videos, diagrams, infographics, or any visual representation that captures non-verbal cues, gestures, emotions, or environmental contexts.

Advantages: Complements textual data by adding visual context and expressions that enhance the depth of qualitative insights.

3. Audio Data

What it is: Recordings of interviews, phone calls, focus group sessions, or any audio-based interactions.

Advantages: Captures tonal variations, emotions, and nuances in verbal communications, providing additional layers of understanding.

4. Observational Data

What it is: Direct observations of behaviours, interactions, or events in real-time settings such as ethnographic studies, field observations, or usability testing.

Advantages: Offers firsthand insights into natural behaviours, decision-making processes, and contextual factors influencing experiences.

5. Contextual Data

What it is: Information about the context, environment, culture, demographics, or situational factors influencing behaviours or perceptions.

Advantages: Helps in interpreting qualitative findings within relevant contexts, identifying cultural nuances, and understanding environmental influences.

6. Metadata

What it is: Additional data accompanying qualitative sources, such as timestamps, location information, participant demographics, or categorizations.

Advantages: Provides context, aids in organizing and filtering data, and supports comparative analysis across different segments or timeframes.

7. Historical Data

What it is: Past records, archival materials, historical documents, or retrospective accounts relevant to the research topic.

Advantages: Offers historical perspectives, longitudinal insights, and continuity in understanding changes, trends, or patterns over time.

8. Digital Data

What it is: Data generated from digital interactions, online platforms, websites, social media, digital surveys, or user-generated content.

Advantages: Reflects digital behaviours, user experiences, online sentiments, and interactions in virtual environments.

9. Multi-modal Data

What it is: Integration of multiple data types such as textual, visual, audio, and contextual data sources for comprehensive analysis.

Advantages: Enables triangulation of findings, validation of insights across different modalities, and holistic understanding of complex phenomena.

10. Secondary Data

What it is: Existing data sources, literature reviews, case studies, or research studies conducted by other researchers or organizations.

Advantages: Supplements primary qualitative data, provides comparative insights, validates findings, or offers historical context to research outcomes.

Understanding when, and how, to use each data type will elevate your overall research efforts. Thanks to the diversity of the data, you can lean on a handful of different forms to arrive at meaningful insights. This flexibility enables you to design robust data strategies that are closely aligned with research objectives.

But it also means that you'll need a qualitative coding system to analyze the data consistently and get the most out of your diverse findings.

How to collect qualitative data

Coding qualitative data effectively starts with having the right data to begin with. Here are a few common sources you can turn to to gather qualitative data for your research project:

Interviews: Conducting structured, semi-structured, or unstructured interviews with individuals or groups is a great way to start. With these you can gather in-depth insights about experiences, opinions, and perspectives. Interviews can be face-to-face, over the phone, or done with video calls.

Focus Groups: This involves bringing together a small group of participants to engage in discussions facilitated by a moderator. Focus groups allow researchers to explore group dynamics, shared experiences, and diverse viewpoints.

Surveys: Design open-ended survey questions to capture qualitative responses from respondents. Surveys can be distributed through email, online platforms, or in-person interviews to gather large volumes of qualitative data.

Observations: Arranging sessions to systematically observe and record behaviours in a particular setting is a great qualitative data source. Observations can be participant-based (the researcher actively participates) or non-participant (the researcher observes without interference).

Document Analysis: You can review existing documents, texts, artifacts, or media sources to extract qualitative insights from them. Documents could be written reports, social media posts, customer reviews, historical records, among other things.

Diaries or Journals: Ask participants to maintain personal diaries or journals to record their thoughts, experiences, and reflections over a specific period. Diaries provide rich, real-time qualitative data about daily life and emotions.

Ethnography: Immersing yourself in participants' natural environments or cultural contexts to observe social behaviours or norms. Ethnographic studies aim to gain deep cultural insights from a particular group.

Each insight collection method offers unique advantages and challenges when it comes to your research objectives.

The key in picking your method is to align data types and collection with your research goals as much as possible, to ensure the data is rich and remains relevant to your research questions.

What are the different types of coding?

Before we dive into the specifics around different methods to code qualitative data, let's start with the most basic understanding of research approaches. In general, there are two: inductive and deductive coding.

Inductive coding is ideal for exploratory research, when the goal is to develop new theories, ideas or concepts. It allows the data to speak for itself.

Deductive coding, on the other hand, is better suited when the researcher has a pre-determined structure or framework they need to fit the data into, such as in program evaluation or content analysis studies.

The key difference between these two approaches is that with deductive coding, you start with a framework of pre-established codes, which you use to label all the data that comes through your research project.

Deductive coding example

Say a researcher wanted to determine the answer to the research question –– what are the main factors that influence customer satisfaction with an e-commerce website?

Using deductive coding, you would develop a set of pre-determined codes based on existing theories and research on customer satisfaction with e-commerce websites. They might include "website usability," "pricing," "product selection," or "customer service."

The researcher then collects the qualitative data, such as customer interviews or open-ended survey responses about their experiences using the e-commerce website, and uses the pre-defined codes as a guide to systematically assign each piece of data to the most relevant category.

Once all the data is coded, you can analyze the frequency and relationships between the different codes to identify the key factors influencing customer satisfaction. You may find, for example, that website usability and shipping/delivery are the most prominent factors driving satisfaction.

This deductive approach helps in testing existing theories and frameworks around e-commerce customer satisfaction. It provides a structured way to analyze the data, and answer the research question.
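
As an illustrative sketch of that last step (the coded responses below are invented), counting code frequencies and co-occurrences is straightforward once every response carries its codes:

```python
from collections import Counter
from itertools import combinations

# Illustrative only: frequency and co-occurrence of pre-defined codes across
# hypothetical coded customer responses.
coded_responses = [
    ["website usability", "pricing"],
    ["customer service"],
    ["website usability", "product selection"],
    ["website usability", "pricing"],
]

code_counts = Counter(code for codes in coded_responses for code in codes)
pair_counts = Counter(pair for codes in coded_responses
                      for pair in combinations(sorted(codes), 2))

print(code_counts.most_common(2))   # [('website usability', 3), ('pricing', 2)]
print(pair_counts.most_common(1))   # [(('pricing', 'website usability'), 2)]
```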

Inductive coding example

Inductive coding operates with a different mindset when it comes to qualitative data analysis. Instead of starting with a pre-defined set of codes, the researcher reads through interview transcripts and begins to identify emerging themes and patterns in the data. This 'bottom-up' approach is distinct from the top-down deductive approach.

Let's say your research question is –– what are the key factors that influence job satisfaction among software engineers?

With this approach, you could collect your qualitative data through interviews with software engineers to hear about their experiences and perceptions of job satisfaction. As you analyze your qualitative data, you start to identify patterns and themes from the data itself, capturing them into codes. These might be "work-life balance," "career development," or "team culture".

With inductive coding, the codes you use are grounded in the actual language and perspectives of the participants. The advantage here is that the data guides the analysis, rather than trying to fit the data into pre-existing assumptions or frameworks. This typically leads to better research outcomes, as real-world experiences and perspectives of the participants ground the insights.

Qualitative coding methods

Now that we know the main ways of assigning codes, let's dive a bit deeper to understand more granular methods.

When it comes to choosing a method to structure and analyze your data, your first criteria should be to align the method with your research goals. It's also worth noting that using multiple complementary methods (triangulation) can provide more robust analysis.

In this section, let's explore a range of qualitative coding methods. Each offers unique perspectives to help you unlock the most meaning from your qualitative data.

Thematic Analysis Coding

Thematic analysis coding is your go-to method when you want to uncover recurring patterns and themes across your qualitative data.

Imagine you're knee-deep in interview transcripts from customer feedback sessions. You start noticing phrases like "user-friendly interface" or "quick issue resolution" popping up frequently. These phrases are your themes. By coding them under relevant categories like "Ease of Use" or "Efficient Support," you're essentially organizing your data in a way that makes sense. This method works wonders when you have a large volume of qualitative data and need to distill it into manageable themes for deeper analysis.

Pattern Coding

Pattern coding is all about spotting and grouping similarly coded excerpts under one overarching code to describe a pattern.

Let's say you're analyzing customer reviews of a new mobile app. You notice phrases like "love the design but slow loading times" or "great features, needs smoother navigation." These phrases share a common thread—the balance between design and functionality. By creating a pattern code like "Design-Functionality Balance," you capture the essence of these comments without losing their individual insights. This method helps you identify trends or issues that might go unnoticed otherwise.

Focused/Selective Coding

Focused or selective coding comes into play when you've completed an initial round of "open coding" and need to refine your codes further.

Picture yourself swimming in a sea of codes derived from open-ended survey responses. You've identified several themes but want to narrow them down to the most relevant ones. Focused coding helps you create a finalized set of codes and categories based on your research objectives. This method is like streamlining your focus, ensuring that every code you use aligns directly with your study's purpose.

Axial Coding

Axial coding is your tool for connecting the dots between codes or categories, unveiling relationships and links within your data.

Imagine you've coded various customer sentiments about a product launch. Some codes relate to pricing satisfaction, while others focus on feature preferences. Axial coding helps you see how these codes intersect—are customers who like certain features more forgiving about pricing, or vice versa? This method dives deep into understanding the interconnectedness of different aspects of your qualitative data.

Theoretical Coding

Theoretical coding lets you build a conceptual framework by structuring codes and categories around emerging theories or concepts.

Imagine you're studying employee satisfaction in a company undergoing digital transformation. Your codes reveal sentiments about adapting to new tools, workload changes, and management support. Theoretical coding helps you map these codes to existing theories like Herzberg's Two-Factor Theory or Maslow's Hierarchy of Needs, adding layers of theoretical understanding to your qualitative analysis.

Elaborative Coding

Elaborative coding is about applying previous research theories or frameworks to your current data and observing how they align or differ.

Let's say your study on customer loyalty echoes findings from established loyalty models like the Loyalty Pyramid. Elaborative coding helps you validate these connections or identify nuances that existing models might overlook. It's like having a conversation between your data and established theories, enriching your analysis with broader industry perspectives.

Longitudinal Coding

Longitudinal coding is crucial when you're tracking changes or developments in qualitative data over time.

Imagine you're studying consumer perceptions of a brand across multiple years. Longitudinal coding allows you to compare sentiments, identify shifts in customer preferences, and track the impact of marketing campaigns or product changes. This method provides a dynamic view of your data's evolution, helping you stay current and adaptive in your research insights.

In Vivo Coding

In vivo coding involves summarizing passages into single words or phrases directly extracted from the data itself.

Say you're analyzing focus group transcripts about online shopping experiences. Participants mention phrases like "cart abandonment blues" or "scroll fatigue." In vivo coding captures the essence of these experiences using participants' own language. It's about letting your data speak for itself, preserving the authenticity and nuances of participants' voices.

Process Coding

Process coding uses gerund codes to describe actions or processes within your qualitative data.

For example, let's say you're studying customer support interactions. Your codes highlight actions like "resolving complaints," "escalating issues," or "navigating knowledge bases." Process coding helps you dissect complex interactions into actionable steps, making it easier to analyze workflows, identify bottlenecks, or pinpoint areas for improvement.

Open Coding

Open coding kicks off your qualitative analysis journey by allowing loose and tentative coding to identify emerging concepts or themes.

Imagine you're starting interviews for a market research project. Open qualitative coding lets you tag responses with codes like "price concerns," "product satisfaction," or "brand loyalty." It's like casting a wide net to capture diverse customer insights, setting the stage for more focused coding and deeper analysis down the road.

Qualitative data software tools

When it comes to qualitative research and doing qualitative data analysis, having the right tools can make all the difference.

There is a plethora of qualitative data analysis software available to help make interpretation a lot easier –– using both deductive and inductive coding techniques. The choice of tool depends on the specific needs of your research project, your familiarity with it, and the level of complexity required. Keep in mind that many researchers find it beneficial to use a combination of tools at different stages of the research process.

Below are some factors to consider when deciding on a tool:

Ability to code and categorize data (both inductively and deductively)

Tools for identifying themes, patterns, and relationships in the data

Visualization capabilities to help explore and present findings

Support for diverse data types (text, audio, video, images)

Collaboration and reporting capabilities

Ease of use and intuitive interface

Qualitative data coding is not just about assigning labels; it's about uncovering stories, emotions, and valuable insights hidden within your qualitative research data. By using a blend of coding methods such as thematic analysis, pattern coding, and in vivo coding, you can get to the heart of your customers' narrative and unearth ways to serve them better.

Ready to unlock the full potential of your qualitative research journey? Get the tools, techniques, and strategies you need with Kapiche –– eliminate costly manual coding, and achieve meaningful, inductive insights fast. Check out a demo of Kapiche today to explore how it can help. 

A guide to coding qualitative research data

Last updated: 12 February 2023

Each time you ask open-ended and free-text questions, you'll end up with numerous free-text responses. When your qualitative data piles up, how do you sift through it to determine what customers value? And how do you turn all the gathered texts into quantifiable and actionable information related to your user's expectations and needs?

Qualitative data can offer significant insights into respondents’ attitudes and behavior. But distilling large volumes of text and conversational data into clear and insightful results can be daunting. One way to resolve this is through qualitative research coding.

  • What is coding in qualitative research?

This is the system of classifying and arranging qualitative data. Coding in qualitative research involves separating a phrase or word and tagging it with a code. The code describes a data group and separates the information into defined categories or themes. Using this system, researchers can find and sort related content.

They can also combine categorized data with other coded data sets for analysis, or analyze it separately. The primary goal of coding qualitative data is to change data into a consistent format in support of research and reporting.

A code can be a phrase or a word that depicts an idea or recurring theme in the data. The code’s label must be intuitive and encapsulate the essence of the researcher's observations or participants' responses. You can generate these codes using two approaches to coding qualitative data: manual coding and automated coding.

  • Why is it important to code qualitative data?

By coding qualitative data, it's easier to identify consistency and scale within a set of individual responses. Assigning codes to phrases and words within feedback helps capture what the feedback entails. That way, you can better analyze and understand the outcome of the entire survey.

Researchers use coding and other qualitative data analysis procedures to make data-driven decisions according to customer responses. Coding customer feedback will help you assess natural themes in the customers’ language. With this, it's easy to interpret and analyze customer satisfaction.

  • How do inductive and deductive approaches to qualitative coding work?

Before you start qualitative research coding, you must decide whether you're starting with some predefined code frames within which the data will be sorted (a deductive approach), or whether you plan to develop and evolve the codes while reviewing the qualitative data generated by the research (an inductive approach). A combination of both approaches is also possible.

In most instances, a combined approach will be best. For example, researchers will have some predefined codes/themes they expect to find in the data, but will allow for a degree of discovery in the data where new themes and codes come to light.

Inductive coding

This is an exploratory method in which new codes and themes are generated by reviewing the qualitative data; the codes come from the data itself. It's ideal for investigative research, in which you devise a new idea, theory, or concept.

Inductive coding is otherwise called open coding. There's no predefined code-frame within inductive coding, as all codes are generated by reviewing the raw qualitative data.

If you're adding a new code, changing a code descriptor, or dividing an existing code in half, ensure you review the wider code frame to determine whether this alteration will impact other feedback codes. Failure to do this may lead to similar responses at various points in the qualitative data being given different codes, even though they contain similar themes or insights.

Inductive coding is more thorough and takes longer than deductive coding, but offers a more unbiased and comprehensive overview of the themes within your data.

Deductive coding

This is a hierarchical approach to coding. In this method, you develop a codebook using your initial code frames. These frames may depend on an ongoing research theory or questions. Go over the data and sort it into the different codes.

After generating your qualitative data, your codes should match the code frame you began with. Program evaluation research could use this coding approach.

Inductive and deductive approaches

Research studies usually blend both inductive and deductive coding approaches. For instance, you may use a deductive approach for your initial set of code sets, and later use an inductive approach to generate fresh codes and recalibrate them while you review and analyze your data.

  • What are the practical steps for coding qualitative data?

You can code qualitative data in the following ways:

1. Conduct your first-round pass at coding qualitative data

You need to review your data and assign codes to different pieces in this step. You don't have to get the codes exactly right at this stage, since you will iterate and evolve them ahead of the second-round coding review.

Let's look at examples of the coding methods you may use in this step.

Open coding: This involves distilling qualitative data down into separate, distinct coded elements.

Descriptive coding: In this method, you create a short description that encapsulates the content of the data section. The code name should typically be a noun or short phrase that describes what the qualitative data relates to.

Values coding: This technique categorizes qualitative data that relates to the participant's attitudes, beliefs, and values.

Simultaneous coding: You can apply several codes to a single piece of qualitative data using this approach.

Structural coding: In this method, you classify parts of your qualitative data according to a predetermined structure (for example, your research or interview questions) so that further analysis can be performed within that structure.

In Vivo coding: Here the codes are specific phrases or single words taken directly from the qualitative interview (i.e., specifically what the respondent said).

Process coding: A coding method that captures action within the data, usually expressed as gerunds ending in “ing” (e.g., running, searching, reviewing).
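Before moving on to grouping, here is a short, hypothetical sketch of how the output of a first coding pass might be recorded. The dataclass fields, sources, and codes are all invented; the example simply illustrates simultaneous coding (several codes on one segment) and an in vivo label sitting alongside the researcher-assigned codes.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class CodedSegment:
    source: str                                      # e.g. an interview or survey response ID
    text: str                                        # the excerpt being coded
    codes: List[str] = field(default_factory=list)   # one or more code labels
    in_vivo: Optional[str] = None                    # participant's own words, if used as a code

first_pass = [
    CodedSegment(
        source="interview_03",
        text="I keep coming back because the staff remember my name.",
        codes=["customer service", "loyalty"],       # simultaneous coding
        in_vivo="remember my name",                  # in vivo label
    ),
    CodedSegment(
        source="survey_117",
        text="Setting up the device took the whole evening.",
        codes=["onboarding friction"],
    ),
]

for seg in first_pass:
    print(seg.source, "->", ", ".join(seg.codes))
```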

2. Arrange your qualitative codes into groups and subcodes

You can start organizing codes into groups once you've completed your initial round of qualitative data coding. There are several ways to arrange these groups. 

Group together codes that relate to one another, or that address the same subject or broad concept, under each category. Continue working with these groups and rearranging the codes until you develop a framework that aligns with your analysis.
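As a toy illustration of this grouping step, the snippet below arranges made-up code labels into candidate groups and shows how easily codes can be moved between groups as the framework evolves; none of the group or code names come from a real project.

```python
# Candidate groups after a first round of coding; all names are invented.
code_groups = {
    "Service experience": ["customer service", "wait times", "staff friendliness"],
    "Product issues": ["onboarding friction", "bugs", "missing features"],
    "Value for money": ["pricing", "discounts", "subscription fatigue"],
}

# Rearranging is just moving labels between groups until the framework
# aligns with the analysis.
code_groups["Service experience"].append("refund handling")

for group, codes in code_groups.items():
    print(f"{group}: {len(codes)} codes -> {', '.join(codes)}")
```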

3. Conduct more rounds of qualitative coding

Conduct more iterations of qualitative data coding to review the codes and groups you've already established. You can change the names and codes, combine codes, and re-group the work you've already done during this phase. 

Your initial attempt at data coding may have been quick and somewhat rough; these later rounds focus on re-analyzing, identifying patterns, and moving closer to developing concepts and ideas.

Below are a few techniques for qualitative data coding that are often applied in second-round coding.

Pattern coding: You group similarly classified snippets of data under a single umbrella code in order to describe a pattern.

Thematic analysis coding: This method helps to identify patterns or themes when examining qualitative data.

Selective coding/focused coding: You refine the codes from your first pass into finished code sets and groups.

Theoretical coding: By classifying and arranging codes, theoretical coding lets you work toward the hypothesis of a theoretical framework. You develop a theory using the codes and groups generated from the qualitative data.

Content analysis coding: This starts with an existing theory or framework and uses the qualitative data to either support or expand upon it.

Axial coding: Axial coding allows you to link different codes or groups together. You're looking for connections and linkages between the information you discovered in earlier coding iterations.

Longitudinal coding: In this method, you organize and systematize your existing qualitative codes and categories so that they can be monitored and compared over time.

Elaborative coding: This involves taking a hypothesis from past research and examining how your present codes and groups relate to it.

4. Integrate codes and groups into your concluding narrative

When you finish going through several rounds of qualitative data coding and applying different forms of coding, use the generated codes and groups to build your final conclusions. The final result of your study could be a collection of findings, theory, or a description, depending on the goal of your study.

Start outlining your hypothesis, observations, and narrative, citing the codes and groups that serve as their foundation. Structure this material to create your final study results.

  • What are the two methods of coding qualitative data?

You can carry out data coding in two ways: automatically or manually. Manual coding involves reading each comment and assigning labels by hand; you'll need to decide whether you're using inductive or deductive coding.

Automatic qualitative data analysis uses a branch of computer science known as natural language processing (NLP) to transform text-based data into a format that computers can comprehend and assess. It's a cutting-edge area of artificial intelligence and machine learning that has the potential to change how research and insights are designed and delivered.

Although automatic coding is faster than human coding, manual coding still has an edge, thanks to human oversight and the current limitations of automated analysis.
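For a feel of what the automatic route involves at its very simplest, here is a deliberately naive keyword-matching sketch. Real NLP-based tools are far more sophisticated (classifiers, topic models, embeddings); the keyword rules, codes, and responses below are invented purely for illustration.

```python
import re

# Illustrative keyword rules mapping regex patterns to codes; invented for this sketch.
keyword_to_code = {
    r"\b(refund|charge|price|expensive)\b": "pricing",
    r"\b(support|helpdesk|agent|service)\b": "customer service",
    r"\b(crash\w*|bug|broken|error)\b": "product quality",
}

def auto_code(response: str) -> list:
    """Return every code whose keyword pattern appears in the response."""
    found = [code for pattern, code in keyword_to_code.items()
             if re.search(pattern, response, flags=re.IGNORECASE)]
    return found or ["uncoded"]  # flag unmatched responses for manual review

responses = [
    "The agent was lovely but the refund took three weeks.",
    "App keeps crashing on startup.",
    "Love the new design!",
]

for r in responses:
    print(auto_code(r), "-", r)
```

Anything the rules cannot match is flagged for manual review, which mirrors the point above: automated coding still benefits from human oversight.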

  • What are the advantages of qualitative research coding?

Here are the benefits of qualitative research coding:

Boosts validity: gives your data structure and organization, so you can be more confident that the conclusions you draw from it are valid

Reduces bias: minimizes interpretation bias by forcing the researcher to undertake a systematic review and analysis of the data

Represents participants well: ensures your analysis reflects the views and beliefs of your participant pool and prevents you from overrepresenting the views of any individual or group

Fosters transparency: allows for a logical and systematic assessment of your study by other academics

  • What are the challenges of qualitative research coding?

It would be best to consider theoretical and practical limitations while analyzing and interpreting data. Here are the challenges of qualitative research coding:

Labor-intensive: While you can use software for large-scale text management and recording, data analysis often still has to be verified or completed manually.

Lack of reliability: Qualitative research is often criticized for a lack of transparency and standardization in the coding and analysis process, which leaves it open to researcher bias.

Limited generalizability: Detailed information on specific contexts is usually gathered from small samples. Drawing generalizable findings is challenging even with well-constructed analysis processes, as the data would need to be gathered far more widely to be genuinely representative of attitudes and beliefs within larger populations.

Subjectivity: Qualitative research is difficult to reproduce because researcher bias enters data analysis and interpretation. When analyzing data, researchers make personal value judgments about what is relevant and what is not, so different people may interpret the same data differently.

  • What are the tips for coding qualitative data?

Here are some suggestions for optimizing the value of your qualitative research now that you are familiar with the fundamentals of coding qualitative data.

Keep track of your codes using a codebook or code frame

It can be challenging to recall all your codes offhand as you code more and more data. Keeping track of your codes in a codebook or code frame will keep you organized as you analyze the data. An Excel spreadsheet or word processing document might be your codebook's basic format.

Ensure you track:

The label applied to each code and the time it was first coded or modified

An explanation of the idea or subject matter that the code relates to

Who the original coder is

Any notes on the relationship between the code and other codes in your analysis

Add new codes to your codebook as you code new data, and rearrange categories and themes as necessary.
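If you prefer to keep the codebook as a simple file rather than a spreadsheet, a minimal sketch along the lines suggested above might look like the following; the file name, column names, and entry are assumptions, not a prescribed format.

```python
import csv
from datetime import date

# Columns mirror the fields suggested above; the names are an assumption.
FIELDS = ["label", "created_or_modified", "description", "original_coder", "related_codes"]

entries = [
    {
        "label": "onboarding friction",
        "created_or_modified": date.today().isoformat(),
        "description": "Difficulty experienced during initial setup or first use",
        "original_coder": "Coder A",
        "related_codes": "product quality; missing features",
    },
]

# "codebook.csv" is a placeholder file name.
with open("codebook.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=FIELDS)
    writer.writeheader()
    writer.writerows(entries)

print("Wrote", len(entries), "codebook entry to codebook.csv")
```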

  • How do you create high-quality codes?

Here are four useful tips to help you create high-quality codes.

1. Cover as many survey responses as possible

The code should be specific enough to aid your analysis while remaining general enough to apply to a variety of comments. For instance, "product" is a general code that can apply to many replies but is also ambiguous.

Also, the specific statement, "product stops working after using it for 3 hours" is unlikely to apply to many answers. A good compromise might be "poor product quality" or "short product lifespan."

2. Avoid similarities

Having similar codes is acceptable only if they serve different objectives. While "product" and "customer service" differ from each other, "customer support" and "customer service" can be unified into a single code.

3. Take note of the positive and the negative

Establish contrasting codes to track an issue's negative and positive aspects separately. For instance, two codes to identify distinct themes would be "excellent customer service" and "poor customer service."

4. Minimize data—to a point

Try to strike a balance between having too many and too few codes, to keep your analysis as useful as possible.

What is the best way to code qualitative data?

Depending on the goal of your research, the procedure of coding qualitative data can vary. But generally, it entails: 

Reading through your data

Assigning codes to selected passages

Carrying out several rounds of coding

Grouping codes into themes

Developing interpretations that result in your final research conclusions 

You can begin by first coding snippets of text or data to summarize or characterize them and then add your interpretative perspective in the second round of coding.

There is no single right or wrong way to code a dataset; some techniques are simply more or less appropriate depending on your study's goal.

What is an example of a code in qualitative research?

A code is, at its most basic level, a label that specifies how to read a segment of text. For example, the phrase "Pigeons assaulted me and took my meal" could be assigned the code "pigeons".

Is there coding in qualitative research?

An essential component of qualitative data analysis is coding. Coding aims to give structure to free-form data so one can systematically study it.


Speaker 1: Hey guys, welcome to Grad Coach TV, where we demystify and simplify the oftentimes confusing world of academic research. My name's David, and today I'm chatting to one of our trusted coaches, Alexandra, about five common mistakes students make during their qualitative research analysis. This discussion is based on one of the many, many articles over at the Grad Coach blog. So, if you'd like to learn more about qualitative research analysis, head over to gradcoach.com forward slash blog. Also, if you're looking for a helping hand with your dissertation, thesis or research project, be sure to check out our one-on-one private coaching service, where we hold your hand throughout the research journey, step-by-step. For more information and to book a free consultation, head over to gradcoach.com. Hey, Alexandra, welcome back to the CoachCast. It's really great to have you back on board.

Speaker 2: Hey, David. Always a pleasure to be here and happy to talk with you today.

Speaker 1: So today we are talking about five common mistakes students make in qualitative research analysis. Let's just dive into it. The first one that comes up quite frequently is a lack of alignment between the analysis and the golden thread. Alexandra, what am I getting at with this?

Speaker 2: Yes, so this idea of the golden thread. You will hear it in all walks of research, whether it is quantitative, mixed methods or qualitative. Really, what you want to do and consider for this golden thread are three fundamental, we'll call them puzzle pieces: the research aims, the research objectives and the research questions. These are kind of the foundation of your qualitative research study, and so how you consider these, knowing what you're trying to do and answer and how you're going to do it, will then help you determine what methodology you should choose that would be the most appropriate or suitable to answer those questions. And this is not particularly easy, because there are several different kinds of qualitative methodologies out there, and it can have some positive outcomes or some negative consequences depending on which methodology you choose to answer those aims, objectives and questions of your golden thread.

Speaker 1: That's really helpful, Alexandra. Maybe you can give us an example or two of where there's alignment or a lack of alignment?

Speaker 2: Sure. So two of the most common methodologies in qualitative research that we see at Grad Coach or elsewhere are case studies versus grounded theory. And the first thing to keep in mind with any study is that the methodology you choose should be the most suitable one to answer those golden thread notions of the aims, objectives and research questions, not the other way around. So, for example, a case study should be used if, in your golden thread ideas of the aims, objectives and research questions, you already have some sort of working knowledge of a group or an event, and so you're using the case study methodology because it will appropriately answer those foundational aspects of the golden thread. On the other hand, let's say your research aims or objectives or questions are about something that you really have limited knowledge about, or there's scarce research out there, and you're wanting to build up a framework or a theory. In that case, using a methodology like grounded theory would be more suitable. So you can see there, with those two examples of case study versus grounded theory, these two methodologies should be applied to answer different golden thread foundational aspects.

Speaker 1: That is really helpful, Alexandra. And I know it can seem a little bit overwhelming to think about getting this alignment right. In cases like this, do not necessarily just rely on your own judgment. It can be really helpful to get a friend or someone from your cohort to take a look through and read what you are working on. They will be able to help you identify where there is a lack of alignment. For instance, if you ask them to give you the elevator pitch back of what you set out to do, and it is not lining up with your thinking, then maybe that is a good point to identify where those misalignments are, and use that to help you address them. But try to do this earlier rather than later; it's definitely going to make your life easier. So our second mistake is making use of transcription software without checking the transcripts. Alexandra, why is this such a problem?

Speaker 2: Yes. So, first of all, there are an increasing number of programs out there that are cost-effective, mostly free, and for the most part accurate, things like Zoom's transcription feature, Otter.ai and ATLAS.ti, and these certainly have a lot of benefits for convenience's sake and cost-effectiveness. However, that's not to say that these programs are perfect, because with a lot of AI and other kinds of automated software, it does lose that human element, and it can miss some of the more nuanced or minute pieces of information that are important. So, for instance, in my own dissertation research, I had about 100 participants who all verbally reacted to a stimulus. Half of my participants were doing this in English and the other half in French, and each recording was about 30 minutes long per participant. Now, with qualitative research, you have to have something to analyze, and it's difficult to do that directly from the audio files, so what you have to do is transcribe these from audio to text. I was going through and doing these manually myself, and from about participant 80 I was beyond exhausted, so I decided to use one of these outside services or programs to kind of expedite this, to help me. And of course it was convenient. However, when I got the transcripts back, I noticed as I was going through the first few of them some errors in content and spelling; different words were showing up where other words had actually been said in the audio files. And as I went along through the rest of them, I noticed that pretty much all 20 or so of these outside-transcribed files had errors. So I ended up having to go back myself regardless, go through them again and fix them. This is all to say that even though these programs can be very convenient and cost-effective, there are some drawbacks, and most of that has to do with content: the words that they miss, spelling, punctuation, grammar, et cetera. You'll oftentimes still have to go in and check these for quality and accuracy. This is why it's important to remember that, even though these programs might be convenient, they're never going to replace that human element of being able to really read and understand what's going on and make sure that it matches what was said in the audio files. And so one of the things that you can do, beyond checking it yourself (which you should), is ask someone else to check these transcripts for accuracy. Because whether you've used an outside service or program, or you've done all the transcriptions by yourself, sometimes we miss things.
Having someone else, an outside person, an actual person, look at these and make sure that they're accurate will not only help you catch potential errors, but in doing so it promotes the credibility of the transcripts, because they're accurate, they're clear, they're actually what was said in the audio files. And sometimes, if you don't have that human element, it can diminish the credibility of the rest of your transcripts even if they are accurate, because the reader or your marker might say, well, this one was not accurate, so maybe there are some flaws in the other ones as well. But beyond the marking, your transcriptions are really your raw data in qualitative analysis, and so if you have errors or missing information in your transcripts that were there in the audio files, this makes the coding and analysis flawed. This puts things in misalignment, and as such there's kind of a domino effect of repercussions that can happen if these things aren't transcribed accurately.

Speaker 1: I think that in the same way that in quantitative research your actual data is key to your analysis, it is the same for qualitative. So we really want to make sure we are doing due diligence to assess the quality of the work. That is not to say you cannot use services to help out. It will depend on your type of research as well. For instance, from a business perspective, you might be less interested in the specific nuance of how someone presented an idea compared to a language study. So in cases like that, there is a bit of a cost-benefit to consider, but regardless of whether you are using a service or not, getting a second run-through of it can be super helpful. And there are a range of services out there that you can use, both in terms of software and human-run services. If you are interested, we even do it here at Grad Coach, so do take a look for the link down below. So our third mistake that frequently comes up is not specifying what type of coding you are doing in advance of actually jumping into the analysis. Alexandra, why do we need to be aware of what coding type we are using so early in the process?

Speaker 2: This goes back to the idea of making sure that all steps of your research align with the previous one and are justifiable, in the sense that there's a reason why you're doing what you're doing in the order that you're doing it. And coding is no exception to this. The reason why coding is so important in qualitative research is that qualitative research is inherently subjective; there is this inherent human interpretation that can happen. And so one of the reasons why it is so important to do coding appropriately is to add the systematicity and the academic rigor to your research that is not inherently there. To ensure this increased objectivity of something that is inherently subjective, you need to consider which kind of coding will be the most appropriate to answer the research goals that you've outlined prior, going back to that notion of the golden thread. Coding inherently falls into two camps: inductive coding and deductive coding. On the one hand, inductive coding is an approach where you are going into your data analysis and letting the themes and the codes emerge from the data. You don't have any preconceived notions, no existing ideas of what to expect. You're really letting the data, whether it comes from interviews or focus groups, you're letting the data from those transcripts emerge into these codes. This is best for studies such as grounded theory approaches, where you don't really have any idea of what to expect or anticipate and you're trying to explore what is out there; you're letting these codes emerge directly from the data. On the other hand, deductive coding is a coding approach where you actually have some ideas about what is out there, what you're looking for, what you hope your final findings to be. For this coding approach, it's top-down: prior to even collecting the data, the interviews, focus groups, what have you, you have developed an initial set of codes into a codebook, whether you've put this in, say, Microsoft Excel or Microsoft Word or Google Sheets, and you have looked through the existing literature on your research topic and seen what the potential codes are out there, what themes you're looking for. And then, once you have collected your data and transcribed it, you're assigning pieces of that data to those codes that you've already created in advance. You are not looking for new codes to emerge like you did with inductive coding, so all codes should go into something from your codebook.

Speaker 1: I think deductive coding is most commonly used where you have a theoretical framework that you're working within or a field that is really, really well researched. There, you're not going to be starting something new. Similarly, it's also become really popular to use a mixed approach of inductive and deductive. This is primarily starting deductively with a codebook and using that codebook to lead your coding, and then developing further from that with an inductive approach. It is worth noting this is a fairly new way to go about coding, so it is important, if you are choosing to go this way, that you can justify why it is appropriate and why it is useful relative to that golden thread, those research aims, objectives, and questions. Because you don't want to be overcomplicating things or stepping too far out of your comfort zone just because it's novel. Rather, make sure it is what you need to do, where you need to do it.

Speaker 2: That's great advice, because sometimes as graduate students we have this urge to do something novel or do it a different way, and that should not be your motivation or your justification to do something. So even though this kind of new approach is developing and becoming increasingly popular, that doesn't mean that it's right for your study. How you know it's right for your study is going back to that notion of the golden thread. And this idea extends even beyond inductive and deductive coding, because those are your starting idea of how you're going to code. Beyond that, there are additional specific approaches that you will use for your initial or first set of coding versus your second set of coding. As an aside here, you should absolutely do more than one round of coding; again, this will increase the systematicity, the rigor, and the credibility, so to speak, of your data analysis. And so there are many different specific coding approaches, but we'll name some of the most common ones here. Starting with open coding: this kind of approach is very loose, very tentative, as indicated by its name, it's open, and so it is more suitable when you're starting out. Other common approaches are things like in vivo coding: with in vivo coding, this is actually using the participants' own words in your analysis, not putting your interpretation on what they said or suggesting what they meant, but actually letting the participants' own words do the talking, so to speak. This is typically most suitable when you're really interested in the perspectives or points of view or experiences of your participants. And then the last one we'll mention, though there are still plenty more, is structural coding. We use structural coding commonly in cases where you have, say, conducted an interview or focus group discussion and you want to use the questions that you posed in the interview or the focus group as headings: all of the codes that go under one specific column, for instance, should be related to one specific question that was asked in the data collection. So this is really best if you are looking for specific answers or codes or themes in response to one of your interview or focus group questions. Again, there are still plenty more out there, but these are some of the more common coding approaches.

Speaker 1: That's really helpful, Alexandra. And it can feel a little overwhelming that there are so many options to choose from. Don't worry, there are a ton of resources out there. Definitely take a look at any of your methodological textbooks from a qualitative perspective. You can take a look at methodology papers that have been published, YouTube tutorials, blog posts, you name it, it's out there. We even have some videos and some content about coding on the Grad Coach blog; links to that will be down in the description below. But importantly, when you are considering these coding decisions, it is important to realize again what you are using them for. So look for that alignment, make sure it is on track, and then it will flow much smoother going forward as well. So our fourth common mistake is that students downplay the importance of organization during both coding and analysis. How important is organization, Alexandra?

Speaker 2: It is so important. The reason why this is so important is that oftentimes we kind of assume that qualitative research and qualitative data cannot be structured. Of course, it's not as black and white or objective as quantitative research, and so what you need to do as a qualitative researcher is to apply a framework yourself that will promote this kind of objectivity and systematicity. Part of this relies on organization, and organization is important not only for the coding but also the analysis. Part of the difficulty, but also the importance, of organizing is that sometimes, after you've transcribed and done, let's say, your initial round of coding, you can end up with very high numbers of codes. For instance, I've seen some where it's upwards of 1,000 codes, and that number is very overwhelming, very large. Some of the ways to tackle this large amount of codes are, one, to make sure that you're organizing all of your codes in a spreadsheet of sorts, whether it's Excel or Google Sheets. Having them all in one place will then further facilitate you doing additional rounds of coding, which we recommended previously. And in doing so, having these additional rounds of coding on your codes that are organized in one place will help you whittle down these codes to the point where you have the codes that you need; there are none that are superfluous or repeated. It's very important to keep these organized in one place and to go through multiple rounds of coding. This will make your life a whole lot easier and make sure that you have only the codes that you need and can justify.

Speaker 1: I think that's super helpful. It's also worth emphasizing that coding and organization are a back and forth: you're going to be moving from one to the next and back again, and that's a good thing to do. It enriches your analysis, but it also allows your organization to inform your coding and your coding to inform your organizational structure, and through that iterative process you're really going to develop the analysis. So don't think, I've coded it once, I'm done and dusted. Sorry to say, it's a multi-pass approach. In terms of organization helping analysis, Alexandra, why is it also important to keep track, in that Google document or sheet, of all your codes?

Speaker 2: Yeah, so this goes back to that notion we've repeated several times of the golden thread. If you think of dominoes, for instance, you need to have your dominoes set up in such a way that if you knock one down, the rest go down. We can think of our qualitative research in such a way. And so if, in the coding stage, everything has aligned with that golden thread and we move on to the analysis, the analysis will be further aligned with the coding, the transcription, the data collection, going back to the research questions, aims and objectives. And so having our codes organized in a sheet will then allow us to start to analyze our codes in a way that we can see themes and patterns emerging that are aligned with the codes, which will then add this rigor and systematicity to your study, by having analysis that you know is based on the very organized, solid foundations of your coding and your transcription. And so through this analysis, if we have our analysis organized, we can keep track of our patterns, our themes, and then, going beyond that, when we actually get to the point where we're writing our findings chapter, we have this set organization that will then allow us to know how we're going to present these results, because everything has been organized and justified up to that point.

Speaker 1: I think that's really helpful. It's also worth noting that having your codebook organized can be really helpful in sort of preventing you from getting stuck with your analysis or feeling like you're unsure of how to code because, you know, things are feeling uncertain. If you have an Excel sheet that you've developed before you start your coding process, you have it organized by the different rounds and you start bringing it from a large number of codes to the specific codes you are going to be using, that organization really helps make that process move forward. And it can be kind of cathartic to really work through that process, get it from a hundred transcripts of 30 minutes each down to some key findings. So our fifth and final mistake that we're covering today is not considering your researcher influence on your analysis. Alexandra, how do we affect our analysis and why is this something that we need to even think about?

Speaker 2: Yeah, so this kind of just goes back to the innate nature of qualitative research. It relies a lot on interpretation. It is subjective; it's not inherently black and white, such as quantitative research. And so the ways that this is mitigated are through things like positionality and reflexivity. These two concepts are becoming much more prominent and required in qualitative dissertations and theses. What these essentially mean is that you have your positionality, which is the underlying beliefs, judgments, opinions, perceptions, all of those things that kind of make you you, the human elements. And so the way that you think about things might be different from the way someone else thinks about them. The reason we need to state our positionality in qualitative research is that it can impact our interpretation of the data, which then impacts the findings. So, for example, in a study where someone is exploring perceptions of men versus women in the tech industry, a researcher who identifies as a feminist versus one who identifies as more conservative or traditional might have underlying beliefs or assumptions about gender when it comes to the workplace or just in general. And it's important to acknowledge that you have these underlying preferences or perspectives, because, like I said, it can have consequences for your analysis and your findings. Taking this a step further, typically now we also have to talk about our reflexivity in qualitative research. Essentially, what this refers to is how our positionality affects our interpretation. So whereas positionality has to do more with the underlying assumptions, reflexivity is taking those underlying assumptions and acknowledging how they might actually impact our interpretation and our findings. And the reason these are often required now in qualitative studies is that the ideas of validity and reliability, we don't really use those in qualitative research; we use more of these ideas of trustworthiness, and that connects to our positionality and our reflexivity. This reflexivity can impact the coding of your data, the themes that you pull from the coding, how you interpret it, how you present it. So in my example of the researcher who has more feminist underlying beliefs versus more traditional, conservative ones, even if they're exploring the same phenomenon, they can have vastly different interpretations. And so acknowledging your positionality, and indicating with your reflexivity how it might impact those steps of the research analysis, can lend more credibility and more trustworthiness to your findings and ultimately your study.

Speaker 1: So that's really helpful, to think about these aspects, because we do need to consider how our positionality and our reflexivity might affect how we proceed with our analysis. There are potential opportunities for bias, and if we're engaging in these behaviors we are able to mitigate them during the analysis, and in cases where you cannot mitigate it, you can at least acknowledge it so other researchers can interpret that going forward. But bias goes a little bit beyond just your positionality and reflexivity. So, Alexandra, what other biases can come up because of researcher effects?

Speaker 2: Yes, this idea of bias. Going further beyond positionality and reflexivity, it can be very easy to have biased interpretations, and there are a few ways this can manifest. So, for instance, spending too much time presenting the findings from one particular participant in your study and neglecting those of the others. One reason why this might happen is that you as the researcher either totally agree personally with their perspective or totally disagree, and you want to present that for some sort of reason. So it's very important to mitigate that bias by presenting a balanced approach across all participants. On the other hand, there are also things like spending a lot of time presenting one particular theme that emerged from your qualitative analysis and avoiding or neglecting the other ones. This can happen when you've found a theme that emerged from your analysis that was particularly interesting to you, whether it was novel, whether it confirmed what you thought, or even aligned with your personal beliefs. It's very important to make sure that you are giving enough attention to all the different themes that have emerged. And a third common bias that we see is that sometimes it can be easy to make claims or assumptions such as this means that, or people should do this. So, for instance, in my example of the tech industry and gender norms, making claims in your writing such as women in the tech industry felt that, or the way that the women in the tech industry talked means that, or the tech industry should do that. Making those kinds of grand, sweeping claims that your qualitative findings mean some sort of big, big thing, we really have to try to avoid that in qualitative writing, despite it being tempting, especially if it aligns with our personal perspective. So those are some common biases we see.

Speaker 1: I think that is super helpful to think through, particularly because biases are inherent to us. So it is important to take that step back, to think about how you might interpret and interact with things, and then engage with that. One way to really check this is to go back and take a look at the data. We do not want to be making statements or assumptions that do not have support in the data; that is just going to undermine your argument and your position as the researcher. So wherever possible, if you don't have data to support it, maybe consider not including it. If you do have data to support it, maybe just confirm with a second opinion, your supervisor or someone else, just to make sure that there's no bias coming in. But I think the most important part here is to think about the fact that we do have biases. And so as long as we're considering this, we're doing our due diligence as researchers.

Speaker 2: Yeah, and so one of the ways that you can also make sure that you are following what you said you were going to do from the get-go is not to step outside of the codes and the themes that you've established. The reason why this might be tempting, again, goes back to the fact that maybe you found something super interesting and you want to present it. What I would caution you towards is making sure that any findings you're presenting fit or align with what your objectives, aims, and research questions were. Another reason why this might happen is that, because the dissertation or the thesis is such a long process, sometimes we can drift away from the original intent of our study. And so, presenting things that are outside of our codes or our themes, we might think we can get away with it, but in reality this minimizes the rigor of your findings. So even though you might find something very interesting, like you said, David, be really careful: make sure that you're still staying within your codes, within your themes, and following that golden thread that you've been establishing throughout.

Speaker 1: Yeah, you've probably heard it so much today, but the golden thread is key. We want to make sure that we're maintaining alignment with our research. It is only going to improve the impact. So, Alexandra, thank you so much for joining us today. It has been really great. There are some great insights here, and thank you again for joining us on the CoachCast.

Speaker 2: Always a pleasure, David. Thanks so much for having me and letting me kind of chat about these qualitative foibles.

Speaker 1: Alright, so that pretty much wraps up this episode of Grad Coach TV. Remember, if you are looking for more information about qualitative research analysis, be sure to check out our blog at gradcoach.com forward slash blog. There you can also get access to our free dissertation and thesis writing mini course that'll give you all the information you need to get started with your research journey. Also, if you're looking for a helping hand with your dissertation, thesis or research project, be sure to check out our one-on-one private coaching service where you can work with one of our friendly coaches, just like Alexandra. For more information and to book a free consultation, head over to gradcoach.com.


Cross-Cultural Research: Methods, Challenges, & Key Findings


Understanding cultural differences isn't just a nice-to-have; it's essential. Whether you're a business leader navigating global markets, an educator working with diverse students, or simply curious about how culture shapes our lives, cross-cultural research offers invaluable insights.

This field of study digs deep into how people from different cultures think, behave, and interact, revealing patterns that can transform how we approach everything from communication to problem-solving.

But how do researchers tackle the complexities of studying such diverse groups? What challenges do they face, and what fascinating discoveries have they made along the way? In this blog, we’ll explore cross-cultural research methods, challenges, and key findings, giving you a front-row seat to this fascinating and ever-relevant field.

What is Cross-Cultural Research?

Cross-cultural research explores and compares different cultures to understand how cultural factors shape people’s behaviors, thoughts, and social practices.

Once rooted in behavioral science, cross-cultural research now extends beyond individual behaviors to explore how cultural contexts influence diverse social practices and interactions globally.

It involves studying and analyzing various cultures to uncover how cultural differences and similarities influence human behavior and social dynamics. It helps us see beyond our cultural perspective and gain insights into how people in different parts of the world live and interpret life.

An example is when we want to understand how different cultures celebrate the New Year. A cross-cultural study would involve studying various New Year traditions worldwide, such as fireworks in the U.S., the Lunar New Year celebrations in China, and the unique customs in Brazil. By comparing these practices, we can learn what different cultures value and how they express their hopes and dreams for the coming year.

Why Cross-Cultural Research Is Important for Your Business

Cross-cultural study is essential for businesses operating in a global marketplace because it provides critical insights into how cultural differences impact various aspects of business. Here’s why it’s important:

1. Helps to Understand Consumer Preferences

Cross-cultural research helps businesses adapt their offerings to meet consumers' specific needs and preferences in different cultures. It also informs marketing strategies and improves the chances of successful market entry by aligning offerings with local tastes and expectations.

2. Offers Effective Communication and Marketing

This research ensures that marketing messages, advertisements, and brand perception are culturally appropriate and resonate with local audiences, avoiding potential misunderstandings or offenses. It allows businesses to create marketing campaigns that appeal to diverse cultural groups, increasing customer engagement and effectiveness.

3. Improves Customer Experience

Cross-cultural research provides insights into cultural expectations around customer service, helping businesses tailor their support approaches to different cultural norms. It increases customer satisfaction by addressing cultural nuances in service delivery and interactions.

4. Helps to Navigate International Business Practices

Cross-cultural research aids in understanding different negotiation styles and business practices across cultures, which is crucial for successful international deals and partnerships. This research helps businesses navigate local regulations and business practices that vary from one culture to another.

5. Builds Stronger Global Teams

Cross-cultural research promotes better teamwork and collaboration among employees from diverse cultural backgrounds by fostering mutual understanding and respect. It also enhances leadership and management practices for leading teams effectively in different cultural contexts.

6. Facilitates Global Expansion

Cross-cultural research assists in developing strategies for entering and establishing a presence in new international markets by understanding local cultural dynamics. It also helps businesses build successful partnerships and alliances with local companies and stakeholders.

Methods of Cross-Cultural Research

By employing a cross-cultural method, scholars and businesses can gain valuable insights into how culture shapes experiences and interactions. Here, we will explore the key methods used in cross-cultural study and their applications.

1. Surveys and Questionnaires

Surveys and questionnaires are widely used in cross-cultural research to collect quantitative data from many participants across different cultures. These tools help researchers gather information on attitudes, beliefs, and behaviors.

How It Works:

  • Design: Develop culturally relevant questions and ensure they are translated accurately to avoid misunderstandings.
  • Distribution: Administer the survey across multiple cultural groups.
  • Analysis: Compare responses to identify cultural differences and similarities.

Example: A survey examining attitudes towards work-life balance across different countries can reveal how cultural values influence workplace expectations and employee satisfaction.
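As a purely hypothetical illustration of the analysis step above, the snippet below compares an invented work-life balance rating across countries using pandas; the countries, column names, and numbers are made up and are not drawn from any real survey.

```python
import pandas as pd

# Invented tidy survey data: one row per respondent.
df = pd.DataFrame({
    "country": ["US", "US", "JP", "JP", "BR", "BR"],
    "worklife_rating": [4, 5, 2, 3, 5, 4],  # 1 = strongly disagree ... 5 = strongly agree
})

# Mean rating and respondent count per country highlight broad differences
# worth probing further with qualitative follow-up (interviews, focus groups).
summary = df.groupby("country")["worklife_rating"].agg(["mean", "count"])
print(summary)
```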

2. Interviews

Interviews provide in-depth qualitative data and allow researchers to explore individuals’ experiences and perspectives in detail. They are particularly useful for understanding complex cultural phenomena.

  • Format: Conduct structured, semi-structured, or unstructured interviews depending on the research goals.
  • Cultural Sensitivity: Be aware of cultural norms related to communication and interaction.
  • Analysis: Analyze interview transcripts to identify themes and cultural patterns.

Example: Interviews with business professionals from different countries can uncover how cultural values influence negotiation styles and decision-making processes.

3. Observational Studies

Observational studies involve watching and recording behaviors in natural settings without interfering. This method provides insights into real-world cultural practices and social interactions.

  • Setting: Choose a naturalistic or controlled setting where cultural behaviors can be observed.
  • Data Collection: Record behaviors and interactions while taking note of cultural context.
  • Analysis: Analyze observations to understand cultural norms and practices.

Example: Observing social gatherings in various cultures can help researchers understand cultural norms around hospitality, etiquette, and group dynamics.

4. Experiments

Experiments in cross-cultural research test hypotheses about how cultural factors affect behavior by manipulating an independent variable and observing the outcomes.

  • Design: Create experiments that are culturally relevant and ensure that experimental conditions are equivalent across cultures.
  • Implementation: Conduct the experiment in different cultural settings.
  • Analysis: Compare results to determine how cultural factors influence the outcomes.

Example: An experiment testing the impact of different advertising messages on consumer behavior across cultures can reveal how cultural values affect marketing effectiveness.

5. Case Studies

Case studies involve in-depth analysis of a single or a few cultural cases to explore specific phenomena or issues in detail.

  • Selection: Choose cases that represent significant cultural practices or social issues.
  • Data Collection: Use multiple methods such as interviews, observations, and document analysis.
  • Analysis: Provide a detailed account of the case, highlighting cultural influences.

Example: A case study of a successful international joint venture can provide insights into how cultural compatibility and differences affect business partnerships.

Applications of Cross-Cultural Research

Understanding how cultural differences and similarities influence human behavior can lead to more effective strategies, policies, and practices. Here’s a look at some key applications of cross-cultural research:

1. Global Business Strategy

Cross-cultural research helps businesses to:

  • Tailor their products
  • Improve their services
  • Set their marketing strategies

It also helps them align these offerings with local cultural preferences and market conditions. It provides insights into how cultural factors influence purchasing decisions, enabling companies to design more effective products and marketing campaigns. Additionally, it improves understanding of negotiation styles and business practices across different cultures.

2. Marketing and Advertising

In marketing and advertising, cross-cultural research guides the creation of messages and campaigns that are sensitive to and respectful of cultural norms. This approach helps businesses position their brands in a way that appeals to diverse cultural groups, enhancing brand acceptance and customer loyalty.

3. Human Resources and Management

In human resources, understanding cultural differences in communication styles, work ethics, and leadership preferences helps in managing a multicultural workforce more effectively. It also informs the design of cross-cultural training programs, which are crucial for employees working in diverse teams and international settings.

4. Product Development and Design

Designing products for a global market involves more than just functionality; it requires an understanding of cultural differences. Key questions include:

  • How can cross-cultural research help identify local preferences and needs for product design?
  • What cultural factors should be considered to ensure a product is intuitive for users from different backgrounds?
  • How can understanding cultural differences enhance user satisfaction and product acceptance?

Cross-cultural research helps ensure that products are tailored to various cultural contexts, making them more intuitive and user-friendly for people around the world. By addressing these cultural factors, designers can create products that resonate with a diverse audience and enhance overall satisfaction.

5. Healthcare and Public Health

In healthcare and public health, cross-cultural research informs the development of practices and policies that respect diverse cultural beliefs and practices. It also guides the creation of effective health education and promotion campaigns tailored to different cultural contexts.

6. Education

In education, cross-cultural research supports the development of inclusive curricula that reflect diverse cultural perspectives and address the needs of students from various backgrounds. It also enhances teaching methods by incorporating culturally relevant materials and approaches, improving educational outcomes for students from different cultures.

7. Policy Making

Cross-cultural research assists in crafting policies that consider cultural diversity and address the needs of different cultural groups. This leads to more equitable and effective governance. Additionally, it enhances diplomatic efforts by creating mutual understanding and respect between nations through awareness of cultural differences and commonalities.

8. Research and Academia

In academic research, cross-cultural studies provide a foundation for: 

  • Comparing cultural phenomena across societies.
  • Contributing to a broader understanding of human behavior and social practices. 

It also informs the development of theories that account for cultural diversity, enriching academic knowledge and research across various fields.

Challenges of Cross-Cultural Research

Conducting cross-cultural study comes with a set of complex challenges that can affect the accuracy and validity of findings. Understanding these challenges is essential for researchers aiming to produce reliable and respectful research outcomes. Here’s a closer look at the key challenges and how to navigate them.

1. Cultural Bias and Ethnocentrism

Researchers might unintentionally view other cultures through the lens of their own culture, which can skew the results. For example, they might assume their own way of doing things is the best or only way.

Researchers should be aware of their own biases and try to understand the culture they’re studying from the inside out. Working with local experts can help provide a more accurate perspective.

2. Language and Translation Issues

Translating research materials like surveys and interviews can be tricky. Words and meanings might get lost or changed during translation.

Use professional translators who understand both the language and cultural context. Checking translations with back-translation (translating back to the original language) and testing them before use can help ensure they’re accurate.

3. Methodological Differences

Different cultures might have different ways of doing research or different norms. What works well in one culture might not be suitable in another. 

Adapt research methods to fit the cultural context while keeping scientific standards. Combining different methods, like surveys and interviews, can provide a fuller picture.

4. Data Interpretation and Analysis

Understanding data from different cultures can be challenging. Without cultural knowledge, it’s easy to misinterpret findings. 

Combine quantitative data (numbers) with qualitative insights (detailed information) for a better understanding. Collaborate with local experts to ensure accurate interpretation.

How Does QuestionPro Help in Cross-Cultural Research?

Cross-cultural research helps us understand how people from different cultures think and behave. Doing this well can be tricky, but QuestionPro offers tools that make the process easier and more effective. Here’s how QuestionPro helps researchers tackle the challenges of studying diverse cultures:

1. Multi-Language Surveys

One of the critical aspects of cross-cultural study is the ability to reach participants in their native languages. QuestionPro supports surveys in multiple languages, allowing researchers to create and distribute surveys that cater to diverse linguistic groups. This feature ensures that participants fully understand the questions, leading to more accurate and reliable data.

2. Cultural Adaptation

QuestionPro allows for the cultural adaptation of surveys. This involves more than just translating the text; it includes adjusting the content to ensure that it is culturally relevant and appropriate. QuestionPro’s platform supports the customization of survey content to match cultural contexts, enhancing the validity of the research.

3. Global Reach with Online Panels

QuestionPro provides access to a vast network of online panels, enabling researchers to target specific cultural groups across the globe. This feature is particularly valuable for comparative studies that require large and diverse sample sizes. Researchers can filter participants based on demographic criteria such as location.

4. Data Segmentation and Analysis

Once data is collected, QuestionPro offers advanced data segmentation and analysis tools that allow researchers to compare responses across different cultural groups. These tools make it easy to identify patterns, trends, and significant differences between cultures. The platform supports cross-tabulation and advanced statistical analysis, helping researchers draw meaningful insights from their data.
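
As an illustration of the kind of cross-tabulation and significance testing such tools automate, here is a minimal sketch in Python; the CSV file and the column names (`culture_group`, `response`) are assumptions for the example, not a QuestionPro export format.

```python
import pandas as pd
from scipy.stats import chi2_contingency

# Hypothetical export of survey responses; the file and column names are assumptions.
df = pd.read_csv("survey_export.csv")  # columns: respondent_id, culture_group, response

# Cross-tabulate responses by cultural group
table = pd.crosstab(df["culture_group"], df["response"])
print(table)

# Chi-square test of independence: do response patterns differ across groups?
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
```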

5. Cultural Sensitivity in Survey Design

QuestionPro provides guidelines and best practices for designing surveys that are culturally sensitive. This includes advice on question wording, avoiding cultural biases, and using neutral language. The platform’s templates and question libraries can also be adapted to fit the cultural context of the comparative study, ensuring that the research is respectful and considerate of cultural differences.

6. Real-Time Collaboration

Cross-cultural research often involves collaboration between researchers from different countries or regions. QuestionPro’s platform supports real-time collaboration, allowing teams to work together on survey design and data collection. This fosters international cooperation and ensures that all team members are on the same page throughout the research process.

7. Mobile-Optimized Surveys

In many cultures, especially in developing regions, mobile devices are the primary means of accessing the internet. QuestionPro’s mobile-optimized surveys ensure that participants can easily respond to surveys using their smartphones or tablets, increasing response rates and making it easier to reach diverse cultural groups.

Cross-cultural research is a powerful tool for understanding the diversity of human behavior and the ways in which culture shapes our lives. By using a variety of methods and being mindful of the challenges involved, researchers can uncover valuable insights that contribute to a more inclusive and culturally aware world. As our world becomes increasingly interconnected, the importance of cross-cultural study will only continue to grow.

QuestionPro is an invaluable tool for cross-cultural research, offering features that address the unique challenges of studying diverse cultures. It equips researchers with what they need to conduct rigorous and culturally sensitive cross-cultural studies, gain deeper insights into cultural differences, and contribute to a better understanding of global diversity. Contact QuestionPro to learn more!

Open access | Published: 27 August 2024

Situational Judgement Tests among Palestinian community members and Red Crescent volunteers to inform humanitarian action: a cross-sectional study

L. S. Moussaoui (ORCID: 0000-0003-0392-7402), M. Quimby (ORCID: 0000-0002-5033-4039), H. Avancini, A. Salawdi, F. Skaik, R. Bani Odeh, O. Desrichard (ORCID: 0000-0003-3269-8813) & N. Claxton (ORCID: 0000-0001-8891-2239)

Archives of Public Health, volume 82, Article number: 141 (2024)

Informing humanitarian action directly from community members is recognized as critical. However, collecting community insights is also a challenge in practice. This paper reports data collected among community members and Red Crescent volunteers in the occupied Palestinian territory. The aim was to test a data collection tool, situational judgment tests (SJTs), to collect insights in the community around three themes.

The SJTs covered violence prevention, road safety, and environmental pollution (waste), and consisted of hypothetical scenarios to which respondents indicated how they would react. For each theme, the pattern of answers provides insights for humanitarian action regarding which beliefs to address. A cross-sectional survey was conducted in January and February 2023 with 656 community members and 239 Red Crescent volunteers.

Data showed that violence is the theme for which the need is the highest among community members. Some responses varied according to the public (age, governorate, or disability level), suggesting actions could be tailored accordingly.

Conclusions

Despite many difficulties during data collection, this study shows that the tool allowed community insights to be collected, a crucial task to ensure an adequate response to the challenges faced by community members and Red Crescent volunteers in occupied Palestine.


Text box 1. Contributions to the literature

• Self-report surveys are the most frequently used method to measure beliefs and perceptions among community members

• Self-report survey limitations are recognized, but there are limited alternatives for collecting data in the field

• This paper reports the test of a method, situational judgement tests, originally used in occupational psychology (employee selection, school admissions), as an alternative way to measure beliefs and norms

• Situational judgement tests allowed data to be collected in Palestinian communities on three different topics (violence, road safety, waste management)

Local actors are crucial informers of the needs of a community, as was formally recognized at the World Humanitarian Summit in 2016 [ 1 ]. The focus on community engagement indicates that the communities being served are a critical part of the team that conducts the assessment, planning, implementation, and evaluation of any intervention plan which exceeds the general expectation that “services are responsive to community needs and inputs” [ 2 ]. Minimum quality standards and indicators for community engagement [ 3 ] ensure, among other things, that communities are meaningful stakeholders in ongoing two-way communication and that programs are aligned with local needs and priorities and are decided with and by community members.

Over 154 Red Cross and Red Crescent National Societies rely on this strategy for the delivery of programs using a Community-Based Health and First Aid (CBHFA) approach with volunteers from the communities leading the identification of challenges, issues and barriers as well as leading the solving of any issues with and by community. Few studies have examined the community-based approach, especially in conflict-affected contexts [ 4 ]. Moreover, a lack of standardization in data collection for community engagement has been highlighted [ 2 ]. This study aims to report the use of Situational Judgement Tests as a method to collect community insights in three specific areas from Palestinian community and Palestine Red Crescent volunteers working in select areas of the occupied West Bank.

Advantages of community insights

In occupied Palestinian territory, local actors such as Red Crescent volunteers serve in the very communities where they live, work and play. This community-level access brings many advantages for crisis preparedness as well as for humanitarian response: community health workers and volunteers have a deep understanding of the unique context and the changing needs [ 5 , 6 ], and, unlike international responders, they are a constant presence before, during and well after a crisis [ 4 ]. They overwhelmingly contribute to ongoing access and support in the long term [ 7 ]; they know the unique needs of their community and speak local languages and dialects; and they understand the culture well enough to know what is feasible and what is not changeable. Community-based volunteers are people who experience the conflict and suffer its consequences [ 4 ] while also providing support to others in the same situation. In this paper, the term local actors/community-based volunteers encompasses both community members and Red Crescent volunteers.

Some rare papers report examples of the use of community insights to guide organizational responses, most of them in the field of emergency response to epidemics. One example is Bedson et al. [ 8 ], who used quantitative (epidemiological) and qualitative data to develop the Community-Led Ebola Action approach. Qualitative data measured the most commonly expressed concerns, the perceived risks of contracting Ebola, and action plans developed in the community. Data was used to establish feedback loops between communities and authorities, and to inform response services.

Baggio [ 9 ] presents how real-time data was used during the Ebola outbreak in the Democratic Republic of Congo. Red Cross volunteers measured the perceptions and needs of affected communities three times a week to tailor the response that aid organizations could provide. Data was collected using a simple form recording comments, which were then entered in a Microsoft Excel log sheet. The analysis of this content allowed communication and programs to be adapted to local trends, for example to address misconceptions.

Building upon the experience with Ebola, Erlach et al. [ 10 ] report how community feedback was used to guide Red Cross and Red Crescent response against COVID-19 in Sub-Saharan Africa. Similarly, content emerging from interactions with community members (such as questions, beliefs, and rumors) was collected and translated into priority responses.

Colombo and Pavignani [ 11 ] analyzed key failings in humanitarian health action. Among others, one that is especially relevant for this paper is the poor communication between humanitarian workers and those they are meant to serve (or between distant managers and frontline workers), which creates distance and inadequate response, because it is not contextualized nor tailored to local needs. Colombo and Checchi [ 12 ] further argue that a culture of immediacy in the response provision, deriving from addressing basic needs at the beginning of an acute crisis, might lead to a tendency to act without a proper situation analysis.

In the cases where, as in CBHFA programming, local residents are affiliated with the organization providing support, the distance mentioned by Colombo and Pavignani [ 11 ] disappears. However, using data to inform action, as advocated by Colombo and Checchi [ 12 ], is not straightforward even if locals/community-based volunteers constitute the team that will provide a program. Volunteers might provide their own perception of the priorities, but they don’t necessarily have access to the beliefs held by all community members, and are not necessarily representative of the general population in terms of demographics. For example, in the occupied Palestinian territories, a majority of volunteers are women, because CBHFA was originally formed as mothers’ clubs and the population still sees CBHFA as female-focused. Thus, collecting data directly in the communities, while necessary, comes with difficulties.

Difficulties associated with community insights

One difficulty raised in the paper by Erlach et al. [ 10 ] on the system set up to collect community insights on COVID-19 perceptions is the impossibility of matching the collected inputs to sociodemographic variables (because of the way feedback was collected), which prevented a finer analysis from being conducted. The authors advocate for triangulation with structured surveys. In the same vein, Bedson et al. [ 2 ] note that there has been limited standardization in data collection as part of community engagement and argue for the adoption of standardized measurements.

SJT as a tool to collect data from the community and the volunteers to guide programs

Some limitations are associated with the use of structured surveys. Notably, Likert scales are widely used to measure respondents’ opinions and beliefs but have been evaluated as not optimal [ 13 , 14 , 15 ]. The format (order of the response options and direction of the question) influences answers [ 13 , 14 ], and cultural differences have been highlighted in terms of answer style or difficulty in choosing one option [ 15 , 16 , 17 , 18 ]. In a study in Sierra Leone, Moussaoui et al. [ 19 ] used Situational Judgment Tests (SJTs), most often used for school admissions and employee selection, to measure the beliefs and norms of the community about several topics of sexual and reproductive health. SJTs are hypothetical scenarios in which the interviewee is asked to select their typical response among several possibilities. In the context of job and school selection, meta-analyses have demonstrated that they are reliable and valid predictors of performance [ 20 ]. Meta-analytical data also supported the tool’s satisfactory test-retest reliability [ 21 ]. Compared to other assessment methods, they are convenient for large-scale delivery and cost-efficient once developed [ 22 ]. According to Lipnevich et al. [ 23 ], SJTs are less subject to the biases frequently present in self-reports, notably respondents trying to guess which answer the interviewer expects. SJTs are written to make it difficult to give what people may feel is the ‘right’ answer, as the tests pose situational prompts that the interviewee encounters or may encounter in their lives within a specific context. Situational judgment test items consist of two elements: the scenario that gives the situation to be solved and the possible actions from which a person can choose.

In a previous study in Sierra Leone [ 19 ], data showed that SJT answers had positive but moderate correlations with self-report items on the same topic. For example, answers to an SJT about one’s hypothetical reaction if their partner slaps them correlated with knowledge of actions to take if violence is witnessed. Among respondents who did not know what immediate action to take if they witnessed violence, none chose the highest-scored SJT answer (telling the person they are no longer together). Conversely, among persons who knew three or more actions to take if they witnessed violence, none gave the lowest-scored SJT answer (put up with the abuse and hope it gets better). Although preliminary, those results suggest SJTs might be an interesting tool to measure stigma-sensitive norms.

Identifying a reliable tool to measure beliefs in communities is essential because if the answers are biased (when respondents are trying to provide the “correct” answer), then the situation analysis will be biased too, and the resulting program will likely miss its point. Aside from reliability, authors have argued for developing questionnaires that are easy to understand and acceptable in terms of topic sensitivity [ 12 ]. SJTs have the advantage of presenting daily life situations with a range of plausible responses. Thus, SJT questions should be easier for respondents to understand than more abstract ways of asking questions.

Focus on the Palestinian context and the three themes studied

We used three SJTs within a broader existing survey to measure norms and beliefs of a select group of villages served by Palestine Red Crescent volunteers. The three questions posed focused on three topics identified in recent Vulnerability and Capacity Assessments which were considered local priorities. The three SJTs consisted of one question each on violence, road accidents, and waste management.

The first topic covered in this study is violence. In the context of the Israeli-Palestinian conflict, political violence has been present for decades [ 24 ]. It has been noted that the unique context of occupied Palestine poses considerably higher levels of stress than other complex situations. However, the issue of violence is not restricted to political violence, as studies show that exposure to political violence is associated with other types of violence, such as family violence [ 25 ] and intimate-partner violence [ 26 ]. A study published in 2020 showed that half of a sample of young Palestinians had personally been victims of violence, and more than two-thirds had witnessed or heard about violence perpetrated on someone close to them [ 27 ]. As stated in a WHO report [ 28 ], “deaths and injuries are only a fraction of the burden” (p.8). Studies have shown that violence exposure has consequences for the mental health of individuals. Notably, Wagner et al. showed that young Palestinians more exposed to violence had higher rates of global distress, depression and anxiety, and that the effect was stronger for females [ 27 ]. The SJT we posed in the survey referenced staying safe from violence in general.

Around the world, the number of road traffic crashes and related deaths is extremely high [ 29 ]. Road accidents cause injuries and deaths, and are also an economic burden for the country [ 30 ]. In occupied Palestinian territories, data show an increasing trend in accidents between 1971 and 2001 [ 31 ] and, more recently, between 1994 and 2015 [ 32 ]. Differences have been noted between regions (e.g., the West Bank compared to the Gaza Strip), explained by differences in population density and motorization rates [ 33 ]. The same authors found that 75.8% of accidents are caused by drivers’ lack of adherence to traffic law and improper driving [ 33 ].

Waste mismanagement is another issue that is prevalent in occupied Palestinian territories, as it is worldwide. A report from the World Bank cites a (conservative) estimate of at least 33% of waste being mismanaged through open dumping or burning [ 34 ]. Ferronato and Torretta [ 35 ] review the environmental impacts of open dumping and open burning and cite marine litter, air, soil and water contamination, and the health impacts of exposure to hazardous waste as the main issues. The lack of sanitary landfills in occupied Palestinian territories leads to dumping and open burning [ 36 ]. A study based on a national household sample survey showed that organic waste accounts for more than 81% of residential solid waste [ 37 ]. In the occupied West Bank, it has been estimated that there are 0.034 mechanical treatment facilities per 100,000 inhabitants and zero composting facilities for the same number of inhabitants [ 38 ]. Thus, the authors suggest that home composting could be an effective approach for this waste fraction.

Research question

Can SJTs be used to measure the norms and beliefs around violence, road safety, and environmental pollution? And does this method allow variations in answers according to socio-demographic groups and geographical regions to be detected?

Both qualitative and quantitative data collection methods were employed in the broader survey, including secondary data review, a community member survey, a volunteer survey, focus group discussions, and key informant interviews. As part of the community and volunteer surveys, three SJT questions were inserted in each survey, addressing three of the top public health risks. These SJTs were included within the community and volunteer surveys in an attempt to measure norms and attitudes around some specific health behaviors.

Sampling strategy

The sampling strategy was based on population data (male and female above 18 years old) and volunteer numbers in each governorate (Hebron, Bethlehem, and Central/Jerusalem). The team aimed for a disproportional stratified random sample, meaning that we wanted approximately an equal number of respondents in each group in order to be able to compare them. The objective was to secure at least 68 community member respondents of each gender in each of the three governorates, leading to a planned minimum of 408 respondents.
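
The planned minimum of 408 respondents follows directly from the quota arithmetic; the short sketch below spells it out (the governorates, genders and the per-stratum quota of 68 are taken from the text, while the code itself is only illustrative):

```python
# Disproportional stratified quota: an equal number per stratum rather than
# numbers proportional to population size, so that groups can be compared.
governorates = ["Hebron", "Bethlehem", "Central/Jerusalem"]
genders = ["female", "male"]
per_stratum = 68  # minimum community member respondents per gender per governorate

quotas = {(gov, gen): per_stratum for gov in governorates for gen in genders}
planned_minimum = sum(quotas.values())
print(planned_minimum)  # 68 respondents x 2 genders x 3 governorates = 408
```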

Palestine Red Crescent Society (PRCS) volunteers served as enumerators for both surveys (community survey and volunteer survey). The enumerators were trained in a face-to-face session in Ramallah with the sampling strategy clearly laid out. However, due to travel restrictions from events related to the ongoing conflict (data collection took place during January and February 2023), the intended strategy was not possible. In following up with enumerators and their leads, it was determined that it was only safe to conduct the surveys via phone. Additionally, the volunteers resorted to a snowball method to gather respondents – calling people that they knew and asking for additional contacts within the three governorates. It was shared that people did not answer the phone if the number was unknown, which was a limitation.

In the end, a total of 656 community members answered at least one of the three SJT questions across the three governorates. 64.6% of the respondents are female and 35.4% are male. The youngest community member who responded is 19 years of age, while the oldest is 97 (mean age = 41.2).

239 volunteers responded to at least one of the three SJT questions on the volunteer survey. 79.5% of responding volunteers are female, while 20.5% are male. The youngest responding volunteer is 19 years of age and the oldest is 80 years of age (mean age = 33.8).

The community and volunteer surveys, inclusive of the suggested SJTs, were prepared, discussed, and edited by the lead researchers, the Swedish Red Cross, and the Palestine Red Crescent. They were then translated by the Palestine Red Crescent. There was no secondary review of the translations to ensure the accuracy of the questions and responses.

Just as real-world situations are never entirely black or white, SJT scenarios sometimes do not have just one right or wrong answer. The response options of an SJT item can contain one action that is the most appropriate for the question asked in that situation (which earns full points), one or two actions that are somewhat appropriate (and earn partial points), and one or two actions that would be inappropriate for the question asked in that situation (earning no points). The answer options available are clear actions to take (or ways to behave) rather than results of actions. Each action is intended to be logically possible for the specific scenario (even wrong ones).
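
As a minimal sketch of this scoring logic, the snippet below scores a respondent against a purely hypothetical key; the actual items, options and point values agreed with PRCS are those reported in Table 1, not the ones shown here.

```python
# Hypothetical SJT scoring key: 2 = most appropriate, 1 = somewhat appropriate,
# 0 = inappropriate. Option labels and point values are illustrative only;
# the real keys are those agreed with PRCS and reported in Table 1.
scoring_key = {
    "violence": {"A": 0, "B": 1, "C": 1, "D": 2},
    "road":     {"A": 0, "B": 0, "C": 1, "D": 1, "E": 2},
}

def score_response(item: str, option: str) -> int:
    """Return the points earned for the option chosen on a given SJT item."""
    return scoring_key[item][option]

# One hypothetical respondent's choices
responses = {"violence": "D", "road": "B"}
total = sum(score_response(item, option) for item, option in responses.items())
print(total)  # 2 + 0 = 2
```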

Table  1 provides the detailed content of the SJTs used in the study for the three topics, in community and volunteers’ versions.

Violence SJT – community

In the community sample answering the violence SJT (N = 418; Footnote 1), 43.1% of respondents chose the option “Stay at home as much as possible”, 21.3% answered “Regularly attend meetings to understand risks”, 18.4% “Work with PRCS CBHFA volunteer to learn about violence prevention”, and 17.2% “Try to stay safe but do not take any exaggerated precautions” (the best answer according to our coding and agreement with PRCS).

There were differences in answer patterns according to the age of respondents, and a Kruskal-Wallis test shows the difference is significant, H(3) = 24.02, p < .001. Results are presented in Fig. 1. Answers from the oldest groups are more frequently “stay at home”, compared to the youngest, who more often try to “stay safe without taking any exaggerated precautions”. Pairwise comparisons with adjusted p-values (Footnote 2) show that the differences between the youngest group and each of the two oldest groups are significant (ps < 0.003, rs > 0.24). All effect sizes are reported in the Appendix.
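
For readers who wish to run this kind of analysis on the shared dataset, the following is a minimal sketch in Python of an omnibus Kruskal-Wallis test followed by Bonferroni-adjusted pairwise Mann-Whitney comparisons; the file name and column names are assumptions, and this is not the authors' actual analysis script.

```python
from itertools import combinations

import pandas as pd
from scipy.stats import kruskal, mannwhitneyu

# Assumed layout: one row per respondent, an ordinal SJT score and an age-group label.
# The file name and column names are hypothetical, not the study's actual export.
df = pd.read_csv("community_violence_sjt.csv")
groups = {name: g["sjt_score"].dropna() for name, g in df.groupby("age_group")}

# Omnibus Kruskal-Wallis test across age groups
H, p = kruskal(*groups.values())
print(f"Kruskal-Wallis: H = {H:.2f}, p = {p:.4f}")

# Pairwise Mann-Whitney comparisons with a Bonferroni correction
pairs = list(combinations(groups, 2))
for a, b in pairs:
    U, p_pair = mannwhitneyu(groups[a], groups[b], alternative="two-sided")
    p_adj = min(p_pair * len(pairs), 1.0)  # Bonferroni: multiply by the number of comparisons
    print(f"{a} vs {b}: U = {U:.1f}, adjusted p = {p_adj:.4f}")
```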

Figure 1. Violence situational judgement test answers according to age in the community sample, January–February 2023, West Bank region, occupied Palestinian territories. Note: PRCS CBHFA = Palestine Red Crescent Society Community-Based Health and First Aid.

There was no significant difference in answers according to gender, U  = 20366.50, z  = 0.083, p  = .934. Answers varied according to governorate, H (2) = 11.84, p  = .003. Figure  2 shows that answers from respondents from the Central governorate/West Bank/Jerusalem were more frequently “stay home”, compared to answers of respondents from Bethlehem and Hebron. Adjusted p-values indicate that these differences are significant ( p s < 0.016, r s > − 0.15), but not Hebron compared to Bethlehem ( p  = .904).

Figure 2. Violence situational judgement test answers according to governorate in the community sample, January–February 2023, West Bank region, occupied Palestinian territories. Note: PRCS CBHFA = Palestine Red Crescent Society Community-Based Health and First Aid.

Answers to the violence SJT also differed according to disability level, U = 10208.00, z = -2.68, p = .007, r = −.13. Figure 3 indicates that respondents with disabilities (defined as having more than one domain of the scale with difficulties) more frequently answered that they would stay at home as much as possible to manage stress related to violence, compared to respondents with no disabilities (one domain or less with difficulties).
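
The binary disability indicator used in these comparisons can be derived as in the sketch below, which assumes one column per functional domain coded 1 when the respondent reports difficulties; the file and column names are hypothetical.

```python
import pandas as pd

# Hypothetical export with one column per functional domain, coded 1 when the
# respondent reports difficulties in that domain and 0 otherwise.
df = pd.read_csv("community_survey.csv")
domain_cols = ["seeing", "hearing", "walking", "remembering", "selfcare", "communicating"]

# Text definition: "with disabilities" = more than one domain with difficulties
df["domains_with_difficulty"] = df[domain_cols].sum(axis=1)
df["has_disability"] = df["domains_with_difficulty"] > 1
print(df["has_disability"].value_counts())
```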

Figure 3. Violence situational judgement test answers according to disability level in the community sample, January–February 2023, West Bank region, occupied Palestinian territories. Note: PRCS CBHFA = Palestine Red Crescent Society Community-Based Health and First Aid.

Violence SJT – volunteers

In the volunteer sample answering the violence SJT ( N  = 166), 16.9% of respondents chose the option “Stay at home as much as possible”, 30.7% chose the answer “Regularly attend meetings to understand risks”, 42.2% answered “Work with PRCS CBHFA volunteer to learn about violence prevention”, and 10.2% “Try to stay safe but do not take any exaggerated precautions”.

There were no differences in answer patterns according to the age of the volunteers, H(3) = 5.83, p = .120, nor according to their gender, U = 2621.00, z = 1.17, p = .243, governorate, H(2) = 0.76, p = .686, or disability, U = 1151.50, z = -1.32, p = .188.

Road SJT – community

In the community sample answering the road safety SJT ( N  = 625), 8.3% of respondents chose the option “Tell your sons that they should not drive like you do”, 24.3% answered “Forbid your sons from driving with their friends”, 6.4% “Allow them to drive the family car only when you are with them”, 25.8% “Work with your PRCS CBHFA volunteer to lead youth sessions about road safety”, and 35.2% “Teach them the correct practices” (the best answer according to our coding and PRCS inputs).

There were no differences in answer patterns according to the age of the respondents, H(3) = 2.84, p = .418, nor according to gender, U = 47515.50, z = 1.08, p = .280. Answers varied according to governorate, H(2) = 8.08, p = .018. Figure 4 shows that community members from Central/West Bank/Jerusalem more frequently answered that they would teach the correct practices to their children, while community members from Bethlehem more frequently answered that they would work with PRCS CBHFA volunteers. Adjusted p-values from pairwise comparisons show that Bethlehem answers differ from Hebron, p = .04, r = −.13, and from Central/West Bank/Jerusalem, p = .016, r = −.15, but Hebron does not differ from Central/West Bank/Jerusalem, p = 1.00.

Figure 4. Road situational judgement test answers according to governorate in the community sample, January–February 2023, West Bank region, occupied Palestinian territories. Note: PRCS CBHFA = Palestine Red Crescent Society Community-Based Health and First Aid.

Answers to this SJT in the community did not vary according to disability level, U  = 29963.00, z = -0.42, p  = .675.

Road SJT – volunteers

Among interviewed volunteers ( N  = 181), 3.9% chose the option “Tell them every day that they must be safe”, 10.5% answered “Forbid your sons from driving with their friends”, 29.3% “Allow them to drive the family car only when you are with them”, 39.8% “Work with your PRCS CBHFA volunteer to lead youth sessions about road safety”, and 16.6% “Teach them the correct practices”.

There were no differences in answer patterns according to the age of the volunteers, H(3) = 5.24, p = .144, nor between male and female respondents, U = 2558.50, z = -0.39, p = .697. Answers did not vary according to governorate, H(2) = 4.58, p = .101, nor by disability level, U = 1236.50, z = -1.78, p = .076.

Waste SJT – community

In the community sample answering the waste SJT ( N  = 231), 14.3% of respondents chose the option “Simply stack up the rubbish until the garbage collectors do come”, 9.5% answered “Give small bags of trash to different household members to get rid of each week wherever they can”, 36.8% “Watch out for vermin in the rubbish stacked up”, 29.9% “Discuss with your PRCS CBHFA volunteer about possibilities to reduce the waste problem”, and 9.5% “Throw the compostable waste into a bin for your garden” (the best answer according to our coding and PRCS inputs).

Answers to this SJT varied according to the age of the respondent, H(3) = 8.10, p = .044. As visible in Fig. 5, the youngest group chose the home-composting answer more often than older respondents. Adjusted p-values of pairwise comparisons show that the comparison between the youngest and the oldest group just misses the significance threshold (p = .051). All other comparisons are non-significant (ps > .233).

Figure 5. Waste situational judgement test answers according to age in the community sample, January–February 2023, West Bank region, occupied Palestinian territories. Note: PRCS CBHFA = Palestine Red Crescent Society Community-Based Health and First Aid.

No significant difference was observed between genders, U = 5918.00, z = -0.55, p = .585. However, a significant difference was found across governorates, H(2) = 47.74, p < .001. Figure 6 presents the pattern of responses. Respondents from Bethlehem are the ones most frequently answering “discuss with PRCS CBHFA volunteer”, while respondents from the Central governorate provide answers on the lower end of the scale. All comparisons are statistically significant: Central/West Bank/Jerusalem vs. Hebron, p < .001, r = −.33; Central/West Bank/Jerusalem vs. Bethlehem, p < .001, r = .61; Hebron vs. Bethlehem, p = .001, r = .29.

Figure 6. Waste situational judgement test answers according to governorate in the community sample, January–February 2023, West Bank region, occupied Palestinian territories. Note: PRCS CBHFA = Palestine Red Crescent Society Community-Based Health and First Aid.

No difference emerged for this SJT according to disability level, U  = 3649.00, z  = 0.39, p  = .69.

Waste SJT – volunteers

In the volunteer sample answering the waste SJT (N = 117), 8.5% of respondents chose the option “Simply stack up the rubbish until the garbage collectors do come”, 12.8% answered “Give small bags of trash to different household members to get rid of each week wherever they can”, 27.4% “Watch out for vermin in the rubbish stacked up”, 46.2% “Discuss with your PRCS CBHFA volunteer about possibilities to reduce the waste problem”, and 5.1% “Throw the compostable waste into a bin for your garden”.

There were no differences in answer patterns according to the age of the volunteers, H(3) = 6.30, p = .098, nor between male and female respondents, U = 1043.50, z = -1.18, p = .237. Answers did vary according to governorate, H(2) = 24.44, p < .001. Results are presented in Fig. 7. The two comparisons that are statistically significant are Central vs. Bethlehem, p < .001, r = .59, and Central vs. Hebron, p = .001, r = −.36. The comparison between Hebron and Bethlehem is not significant, p = .104. No difference emerged according to disability level, U = 597.50, z = 0.14, p = .885.

Figure 7. Waste situational judgement test answers according to governorate in the volunteer sample, January–February 2023, West Bank region, occupied Palestinian territories. Note: PRCS CBHFA = Palestine Red Crescent Society Community-Based Health and First Aid.

Sensitivity analysis – community

Because the response option mentioning PRCS could have led to more social desirability than the others, we conducted a sensitivity analysis excluding this response option, to test whether the patterns according to socio-demographic variables (age, gender), governorate and disability level changed.
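
A minimal sketch of this sensitivity check, assuming a long-format export in which each row stores the chosen option's label and its ordinal score (the file name, column names and option label are hypothetical):

```python
import pandas as pd
from scipy.stats import kruskal

# Hypothetical long-format export: one row per respondent with the chosen
# option's label, its ordinal score, and an age-group label.
df = pd.read_csv("community_waste_sjt.csv")

# Drop the response option that mentions PRCS before re-running the omnibus test.
# The label below is an assumption about how that option is stored.
PRCS_OPTION = "Discuss with your PRCS CBHFA volunteer about possibilities to reduce the waste problem"
sub = df[df["answer_label"] != PRCS_OPTION]

groups = [g["sjt_score"].dropna() for _, g in sub.groupby("age_group")]
H, p = kruskal(*groups)
print(f"Sensitivity analysis without the PRCS option: H = {H:.2f}, p = {p:.4f}")
```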

For the road SJT in the community sample, no result changed in statistical significance.

For the waste SJT in the community sample, the age difference that was significant in the main analysis became non-significant when removing the option mentioning PRCS, H (3) = 2.49, p  = .478. The other effects did not change (i.e., gender and disability level’s effects remained non-significant, and differences across governorate remained significant).

For the violence SJT in the community sample, all effects remained the same (i.e., age and governorate remained significant and gender remained non-significant) in the sensitivity analysis, except for disability level. The disability level effect became non-significant (although it remained close to the significance threshold), U = 7849.00, z = -1.88, p = .061.

Sensitivity analysis – volunteers

For the road SJT, no difference emerged in the sensitivity analysis for age, gender or governorate. However, the effect of disability became significant, U = 403.00, z = -2.23, p = .026, r = −.21. As presented in Fig. 8, respondents with more than one domain with difficulties were more likely to answer that they would forbid their sons from driving with friends or tell them every day that they must be safe, and less likely to answer that they would allow them to drive only when accompanied or teach them the correct practices.

Figure 8. Road situational judgement test answers according to disability in the volunteer sample (sensitivity analysis), January–February 2023, West Bank region, occupied Palestinian territories.

For the waste SJT, all effects remained similar to the main analysis (i.e., age, gender and disability remained non-significant, and governorate remained significant). Thus, there is no change in conclusion with sensitivity analysis for this SJT.

For the violence SJT, the age effect becomes significant when removing the category mentioning PRCS, H(3) = 10.28, p = .016. The adjusted pairwise comparisons show that it is the age groups of 31–42 and 43–80 that differ significantly from one another, as presented in Fig. 9. The older age group is more likely to answer that they would stay home as much as possible, while the younger group is more likely to attend meetings and to try to stay safe without taking any exaggerated precautions.

Figure 9. Violence situational judgement test answers according to age in the volunteer sample (sensitivity analysis), January–February 2023, West Bank region, occupied Palestinian territories.

The gender effect remains non-significant, as does the governorate effect. The disability effect becomes significant in the sensitivity analysis, U = 322.00, z = -2.23, p = .026, r = −.23. Figure 10 shows that volunteers with more than one domain with difficulties are more likely to answer that they would stay home as much as possible, and less likely to attend meetings to understand risks.

Figure 10. Violence situational judgement test answers according to disability in the volunteer sample (sensitivity analysis), January–February 2023, West Bank region, occupied Palestinian territories.

This paper reports the results of a field study using SJTs to collect community insights on three priority topics (violence prevention, road safety, and environmental pollution/waste) in selected West Bank Palestinian communities and among Red Crescent volunteers. Despite the limitations described below, the study shows that the tool worked to measure norms and beliefs in a standardized manner. An important result is that response patterns sometimes vary according to age, governorate, or disability level. This shows that the tool was able to detect those differences. It also highlights the need to target subgroups differently concerning those topics. Perhaps surprisingly, no significant difference emerged according to the gender of the respondents, in either the community or the volunteer sample. This lack of difference suggests that both genders share norms on those three topics.

Implications for specific SJT - violence prevention

We see that most community members – especially those with more than one disability – as well as older volunteers choose to stay home in times of violence. PRCS’ household visits can be supplemented with online WhatsApp support groups and other ways to reach people in their homes where they feel safest.

The best answer to the SJT question was to stay safe without taking exaggerated precautions and to trust your instinct, yet only 10.2% of volunteers chose this response.

Most volunteers chose the support of PRCS and attending meetings to stay aware of risks. These responses reflect the cultural norm of tightly knit support networks in the communities and the reliance on PRCS and community meetings to plan. PRCS volunteers seek safety and reassurance among fellow volunteers rather than relying on themselves alone. This reflects the Palestinian norm of strength in working as a community, which can be fostered, for example through sessions coaching leadership skills.

Implications for specific SJT – road safety

The best answer to the SJT question, teaching sons to drive properly, was the most frequent response among community members. The volunteers’ most frequent response was to work with the PRCS. This highlights how volunteers uphold PRCS as part of their support base, with many volunteers wanting more training on road safety. Meanwhile, community members want to model behaviours, one of the most effective ways of changing behaviours. This suggests that volunteers would benefit from some form of training, but also from practical exercises to encourage road safety practices. Using community members as models of good driving would be incredibly valuable to both volunteers and the community.

Implications for specific SJT – environmental pollution (waste)

The best answer, to throw compostable waste into a bin to use in the garden to provide nutrients to plants, was chosen by only 9.5% of community respondents and 5.1% of volunteers. Community members are more aware of sustainable solutions to environmental pollution than some of their volunteers, yet much more education and awareness are needed. Activities can focus on ways to reduce waste that also encourage household gardens. The most popular response among community members was to watch out for vermin in the trash; among volunteers it was talking to the PRCS volunteer or branch supervisor (also the second most frequent option for community members). This indicates a high level of expectation of PRCS supervisors and volunteers. Training staff and volunteers on sustainable waste management strategies like composting and household gardens is strongly suggested. There is also an opportunity to teach what to do to stay safe when vermin are spotted.

Limitations

One limitation of our study is that the number of volunteers surveyed is smaller than the number of community members, leading to a smaller probability of finding significant effects of demographic variables or geographical differences, if they exist. Additionally, we used the same volunteers as enumerators for both the volunteer survey and the community survey. This was done for safety reasons and to ensure a greater response, as the PRCS is a trusted entity in communities where risk is high and outside enumerators may not have been effective. However, the downside is that volunteers interviewing other volunteers might have triggered a motivation to give “good” answers. As far as possible, future studies should try to rely on external enumerators for the volunteer group.

A limitation of the use of SJTs, or of other tools (e.g., Knowledge Attitudes Practices surveys), is the time required and the lack of fully qualified staff to conduct the assessment and analyze the data, as highlighted by respondents in the study by White et al. [ 39 ]. Aside from the question of whether the resources to conduct data collection were available, a more complex issue is the fact that resources used for data collection could have been used differently, for example in funding relief and/or aid to those in need [ 12 ]. We argue that collected data must later be used to inform programs or monitor existing actions for the resources to be considered efficiently used. Regarding the question of limited resources and staff, making developed SJTs publicly available, as we do here (see also [19]), may help other teams build from existing resources and save time in tool development.

We want to point out that the hierarchy in response options can be challenged and could have been set otherwise. Our hierarchy was discussed with the various team members, and we followed the opinion of local members on the basis that they know best what works for them in their context. For example, in the violence prevention SJT, the option to do nothing special and trust your intuition to stay safe could be perceived as insufficient from an external point of view, but it was considered best by Palestinian members of the team because otherwise, people would be spending all their time in meetings due to the long-standing and pervasive issue of violence.

A number of difficulties arose during data collection, and are worth mentioning to put our results in perspective:

(A) Four iterations of the survey were deployed during the planned data collection period due to mistakes in implementing the survey on the data collection platform. We eliminated all data that was incomplete, suspect or unreliable, which resulted in a loss of many data points.

(B) ODK and the server on which it runs were not functioning on the day of the enumerator training, which limited our ability to demonstrate the survey and allow for practice in a training environment.

(C) The printed survey used in the enumerator training did not reflect the most updated survey; thus the enumerator training was minimally helpful in ensuring that enumerators followed the sampling strategy or even conducted the survey effectively.

(D) We were unable to adhere to the sampling strategy because there were two significant security incidents during the data collection period. It was deemed unsafe for enumerators to collect data face-to-face in some areas according to the sampling strategy. It was agreed that enumerators could call people within the collection area to administer the survey and apply snowball sampling for additional potential respondents. This deviation in strategy meant that the data were not collected from a randomly selected sample. Because the sample is not randomly selected, respondents might not be representative of the general population, thus limiting the generalizations that can be drawn from the survey data. For example, it is possible that PRCS volunteers have socio-economic characteristics different from those of the general population (in terms of literacy and education level, for example), and people recruited via snowball sampling might share the same characteristics, creating a sampling bias [ 40 ].

(E) While the capacity of volunteers to serve as enumerators is helpful, it does lend bias when volunteers are asking community members questions around PRCS work. This is the reason why we conducted the sensitivity analysis without the response option mentioning PRCS.

The latter two limitations deserve more elaboration in the specific context of oPt, which we delve into below.

These limitations may be of strong concern to readers unfamiliar with the day-to-day complexity of life in occupied Palestinian territories. But for those who work with and serve the people within these communities, we argue that, despite the challenges, collecting reliable data is a worthwhile and necessary endeavour. Data captured in one of the most complex environments is important to ensuring that interventions and projects respond to the ongoing challenges faced by community members across occupied Palestine. The oPt is a unique context in which Palestinians often suffer from a significant imbalance of power. There are imposed limits on who can access parts of oPt, including Palestinians within oPt, making it very difficult to deploy outside evaluators, or even Palestinians unknown to the communities, as enumerators. With random household sampling, this would have meant unknown persons visiting respondents’ homes and schools; in this context, this was not feasible or ethical. Further, the PRCS is an organisation that almost all Palestinians are familiar with, many relying on its health units, clinics and volunteers, who are a ready and constant presence, in peace and in conflict, often providing assistance when no other organisations are operating or assisting. The decision was made with PRCS to deploy community volunteers within their own communities to collect data to ensure that people felt comfortable and could answer without anxiety. Volunteers are a trusted source of information and all have vests and ID cards to identify their affiliation with PRCS. Volunteers at PRCS are familiar with the data collection tool and most already had it installed on their mobile phones.

We trained the volunteer enumerators in random sampling. However, a violent attack on Palestinians occurred in the West Bank the day that data collection was to begin. The communities’ collective anxiety greatly contributes to Palestinians’ apprehension of people from outside their community, and this incident heightened that fear. In speaking with local volunteers after the incident, the team was informed that people were not answering their doors to those they did not know, greatly affecting our plan to conduct random sampling. The volunteers said that, in communicating with other volunteers, the only way that data collection would occur at our anticipated scale in the environment of fear and intimidation was through the snowball method. We had a call with the programme lead regarding how to manage the shift to non-random sampling. We prepared guidance to communicate to volunteers on how to use the snowball method to reduce bias. The programme lead explained that the selected staff and volunteers were well-versed in the method due to regular occurrences of violence against the population and the need to shift methods. When asked if follow-up training and support were needed, we were informed that, due to the halt on movement within select areas of the West Bank, a follow-up face-to-face training was not possible. The programme lead sent texts about employing the snowball method to Programme Coordinators who, in turn, shared them with the volunteers.

Future studies’ perspectives include increasing data quality, for example by using triangulation of multiple methods [ 41 ] to avoid relying on only one data source whose feasibility could be jeopardized by external circumstances. Another direction could be mobile technology, which could improve accessibility [ 42 ] if enumerators no longer need to interview respondents face-to-face. However, the feasibility of such methods will depend on the available resources and the local context. As Axinn et al. concluded in their paper [ 43 ], data collection during armed conflict requires tailoring to the circumstances.

This study shows that SJTs can be used to measure communities’ beliefs and norms about various topics, ranging from environmental pollution to violence and road safety. We hope that this step in developing a tool to capture local needs and priorities will help increase community engagement and ensure genuine local leadership.

Data availability

The datasets generated and analysed during the current study are available in the OSF repository, https://osf.io/p98s6/?view_only=fc0945d2ecfb40bbb64fff550b604fc6 .

Footnote 1: N varies from the initial sample size due to missing data.

Footnote 2: Significance values adjusted by the Bonferroni correction to take into account multiple testing.

Abbreviations

SJT: Situational Judgement Test

CBHFA: Community-Based Health and First Aid

PRCS: Palestine Red Crescent Society

oPt: Occupied Palestinian territories

UN General Assembly. Report of the secretary-general on the outcome of the world humanitarian summit [Internet]. 2016 Aug. Report No.: A/71/353. http://undocs.org/A/71/353

Bedson J, Skrip LA, Pedi D, Abramowitz S, Carter S, Jalloh MF, et al. A review and agenda for integrated disease models including social and behavioural factors. Nat Hum Behav. 2021;5(7):834–46.


UNICEF. Minimum quality standards and indicators for community engagement. 2020.

Kuipers EHC, Desportes I, Hordijk M. Of locals and insiders: a localized humanitarian response to the 2017 mudslide in Mocoa. Colombia? Disaster Prev Manag Int J. 2019;29(3):352–64.


Ramalingam B, Gray B, Cerruti G. Missed opportunities: the case for strengthening National and Local Partnership-based humanitarian responses. Johannesburg: Christian Aid, CAFOD, Oxfam FB, TearFund and ActionAid; 2013.


Bedford J, Butler N, Gercama I, Jones T, Jones L, Baggio O, et al. From words to action: towards a community-centred approach to preparedness and response in health emergencies. Geneva, Switzerland: International Federation of Red Cross and Red Crescent Societies; 2019.

Gizelis TI, Kosek KE. Why humanitarian interventions succeed or fail: the role of local participation. Coop Confl. 2005;40(4):363–83.

Bedson J, Jalloh MF, Pedi D, Bah S, Owen K, Oniba A, et al. Community engagement in outbreak response: lessons from the 2014–2016 Ebola outbreak in Sierra Leone. BMJ Glob Health. 2020;5(8):e002145.


Baggio O. Real-time Ebola Community Feedback Mechanism. UNICEF, IDS and Anthrologica; 2020. (SSHAP Case Study 10).

Erlach E, Nichol B, Reader S, Baggio O. Using Community Feedback to Guide the COVID-19 response in Sub-saharan Africa: Red Cross and Red Crescent Approach and lessons learned from Ebola. Health Secur. 2021;19(1):13–20.


Colombo S, Pavignani E. Recurrent failings of medical humanitarianism: intractable, ignored, or just exaggerated? Lancet. 2017;390(10109):2314–24.

Colombo S, Checchi F. Decision-making in humanitarian crises: politics, and not only evidence, is the problem. Epidemiol Prev. 2018;42(3–4):214–25.


Chyung SY, Kennedy M, Campbell I. Evidence-based Survey Design: the Use of Ascending or Descending Order of Likert-Type Response options. Perform Improv. 2018;57(9):9–16.

Friborg O, Martinussen M, Rosenvinge JH. Likert-based vs. semantic differential-based scorings of positive psychological constructs: a psychometric comparison of two versions of a scale measuring resilience. Personal Individ Differ. 2006;40(5):873–84.

Flaskerud JH. Cultural Bias and Likert-Type scales Revisited. Issues Ment Health Nurs. 2012;33(2):130–2.

Lipnevich AA, MacCann C, Krumm S, Burrus J, Roberts RD. Mathematics attitudes and mathematics outcomes of U.S. and Belarusian middle school students. J Educ Psychol. 2011;103(1):105–18.

He J, van de Vijver FJR. A general response style factor: evidence from a multi-ethnic study in the Netherlands. Personal Individ Differ. 2013;55(7):794–800.

Lee JW, Jones PS, Mineyama Y, Zhang XE. Cultural differences in responses to a likert scale. Res Nurs Health. 2002;25(4):295–306.

Moussaoui LS, Law E, Claxton N, Itämäki S, Siogope A, Virtanen H, et al. Sexual and Reproductive Health: how can situational Judgment tests help assess the norm and identify Target groups? A field study in Sierra Leone. Front Psychol. 2022;13:866551.

Webster ES, Paton LW, Crampton PES, Tiffin PA. Situational judgement test validity for selection: a systematic review and meta-analysis. Med Educ. 2020;54(10):888–902.

Harenbrock J, Forthmann B, Holling H. Retest reliability of situational Judgment tests: a Meta-analysis. J Pers Psychol. 2023;22(4):169–84.

Patterson F, Zibarras L, Ashworth V. Situational judgement tests in medical education and training: Research, theory and practice: AMEE Guide 100. Med Teach. 2016;38(1):3–17.

Lipnevich AA, MacCann C, Roberts RD. Assessing non-cognitive constructs in education: a review of traditional and innovative approaches. In: Saklofske DH, Reynolds CR, Schwean V, editors. The Oxford handbook of child psychological assessment [Internet]. Oxford University Press; 2013 [cited 2021 Jul 14]. https://doi.org/10.1093/oxfordhb/9780199796304.001.0001/oxfordhb-9780199796304-e-033

OCHA. Occupied palestinian territory. Fragmented lives. Humanitarian overview 2016 [Internet]. East Jerusalem: United Nations Office for the Coordination of Humanitarian Affairs occupied Palestinian territory. 2017. https://www.ochaopt.org/content/fragmented-lives-humanitarian-overview-2016

Dubow EF, Boxer P, Huesmann LR, Shikaki K, Landau S, Gvirsman SD, et al. Exposure to conflict and violence across contexts: relations to Adjustment among Palestinian Children. J Clin Child Adolesc Psychol. 2009;39(1):103–16.

Clark CJ, Everson-Rose SA, Suglia SF, Btoush R, Alonso A, Haj-Yahia MM. Association between exposure to political violence and intimate-partner violence in the occupied Palestinian territory: a cross-sectional study. Lancet. 2010;375(9711):310–6.

Wagner G, Glick P, Khammash U. Exposure to violence and its relationship to mental health among young people in Palestine. East Mediterr Health J. 2020;26(2):189–97.

WHO. Global status report on violence prevention 2014 [Internet]. World Health Organization. 2014. https://apps.who.int/iris/handle/10665/145086

Global status report on road safety 2018: summary. Geneva: World Health Organization; 2018.

Chen S, Kuhn M, Prettner K, Bloom DE. The global macroeconomic burden of road injuries: estimates and projections for 166 countries. Lancet Planet Health. 2019;3(9):e390–8.

Sarraj YR. Behavior of road users in Gaza, Palestine. J Islam Univ Gaza. 2001;9(2):85–101.

Hassouna FMA, Abu-Eisheh S, Al-Sahili K. Analysis and modeling of Road Crash trends in Palestine. Arab J Sci Eng. 2020;45(10):8515–27.

Abu-Eisheh S, Kobari F. An Overview of Road Safety in the Palestinian Territories. In: Traffic And Transportation Studies (2002) [Internet]. Guilin, China: American Society of Civil Engineers; 2002 [cited 2023 Mar 29]. pp. 1063–70. https://doi.org/10.1061/40630%28255%29147

Kaza S, Yao LC, Bhada-Tata P, Van Woerden F. What a Waste 2.0: A Global Snapshot of Solid Waste Management to 2050. Washington, DC: World Bank; 2018. (Urban Development).


Ferronato N, Torretta V. Waste Mismanagement in developing countries: a review of Global issues. Int J Environ Res Public Health. 2019;16(6):1060.

Al-Khatib IA, Arafat HA, Basheer T, Shawahneh H, Salahat A, Eid J, et al. Trends and problems of solid waste management in developing countries: a case study in seven Palestinian districts. Waste Manag. 2007;27(12):1910–9.

Al-Khatib IA, Arafat HA. A review of residential solid waste management in the occupied Palestinian territory: a window for improvement? Waste Manag Res J Sustain Circ Econ. 2010;28(6):481–8.

Di Maria F, Lovat E, Caniato M. Waste management in developed and developing countries: the case study of Umbria (Italy) and the West Bank (Palestine). Detritus. 2018;In Press(1):1.

White S, Heath T, Mutula AC, Dreibelbis R, Palmer J. How are hygiene programmes designed in crises? Qualitative interviews with humanitarians in the Democratic Republic of the Congo and Iraq. Confl Health. 2022;16(1):45.

Chan JT. Snowball sampling and sample selection in a social network. In: De Paula Á, Tamer E, Voia MC, editors. Advances in Econometrics [Internet]. Emerald Publishing Limited; 2020 [cited 2024 Jul 30]. pp. 61–80. https://doi.org/10.1108/S0731-905320200000042008

Lewandowski GWJr, Strohmetz DB. Actions can speak as loud as words: measuring Behavior in Psychological Science. Soc Personal Psychol Compass. 2009;3(6):992–1002.

Roll K, Swenson G. Fieldwork after conflict: contextualising the challenges of access and data quality. Disasters. 2019;43(2):240–60.

Axinn WG, Ghimire D, Williams NE. Collecting survey data during armed conflict. J off Stat. 2012;28(2):153–71.

PubMed   PubMed Central   Google Scholar  

Download references

    Cross-cultural ethnography has been an established method since the early 1900s (Boas, 1911; Kroeber, 1909), and has a century-long tradition of methodological innovation (Bernard, 2017; Ember, 2009).Early methodological research established procedures for cross-cultural surveys, sampling, and coding (Ember, 1971; Murdock, 1940; Naroll, 1965; Tylor, 1889).