Questionnaire – Definition, Types, and Examples
Definition:

A Questionnaire is a research tool or survey instrument that consists of a set of questions or prompts designed to gather information from individuals or groups of people.

It is a standardized way of collecting data from a large number of people by asking them a series of questions related to a specific topic or research objective. The questions may be open-ended or closed-ended, and the responses can be quantitative or qualitative. Questionnaires are widely used in research, marketing, social sciences, healthcare, and many other fields to collect data and insights from a target population.

History of Questionnaire

The history of questionnaires can be traced back to the ancient Greeks, who used questionnaires as a means of assessing public opinion. However, the modern history of questionnaires began in the late 19th century with the rise of social surveys.

The first social survey was conducted in the United States in 1874 by Francis A. Walker, who used a questionnaire to collect data on labor conditions. In the early 20th century, questionnaires became a popular tool for conducting social research, particularly in the fields of sociology and psychology.

One of the most influential figures in the development of the questionnaire was the psychologist Raymond Cattell, who in the 1940s and 1950s developed the personality questionnaire, a standardized instrument for measuring personality traits. Cattell’s work helped establish the questionnaire as a key tool in personality research.

In the 1960s and 1970s, the use of questionnaires expanded into other fields, including market research, public opinion polling, and health surveys. With the rise of computer technology, questionnaires became easier and more cost-effective to administer, leading to their widespread use in research and business settings.

Today, questionnaires are used in a wide range of settings, including academic research, business, healthcare, and government. They continue to evolve as a research tool, with advances in computer technology and data analysis techniques making it easier to collect and analyze data from large numbers of participants.

Types of Questionnaire

Types of Questionnaires are as follows:

Structured Questionnaire

This type of questionnaire has a fixed format with predetermined questions that the respondent must answer. The questions are usually closed-ended, which means that the respondent must select a response from a list of options.

Unstructured Questionnaire

An unstructured questionnaire does not have a fixed format or predetermined questions. Instead, the interviewer or researcher can ask open-ended questions to the respondent and let them provide their own answers.

Open-ended Questionnaire

An open-ended questionnaire allows the respondent to answer the question in their own words, without any pre-determined response options. The questions usually start with phrases like “how,” “why,” or “what,” and encourage the respondent to provide more detailed and personalized answers.

Closed-ended Questionnaire

In a closed-ended questionnaire, the respondent is given a set of predetermined response options to choose from. This type of questionnaire is easier to analyze and summarize, but may not provide as much insight into the respondent’s opinions or attitudes.

Mixed Questionnaire

A mixed questionnaire is a combination of open-ended and closed-ended questions. This type of questionnaire allows for more flexibility in terms of the questions that can be asked, and can provide both quantitative and qualitative data.

Pictorial Questionnaire

In a pictorial questionnaire, instead of using words to ask questions, the questions are presented in the form of pictures, diagrams or images. This can be particularly useful for respondents who have low literacy skills, or for situations where language barriers exist. Pictorial questionnaires can also be useful in cross-cultural research where respondents may come from different language backgrounds.

Types of Questions in Questionnaire

The types of questions in a questionnaire are as follows:

Multiple Choice Questions

These questions have several options for participants to choose from. They are useful for getting quantitative data and can be used to collect demographic information.

  • What is your favorite color? a. Red  b. Blue  c. Green  d. Yellow

Rating Scale Questions

These questions ask participants to rate something on a scale (e.g. from 1 to 10). They are useful for measuring attitudes and opinions.

  • On a scale of 1 to 10, how likely are you to recommend this product to a friend?

Open-Ended Questions

These questions allow participants to answer in their own words and provide more in-depth and detailed responses. They are useful for getting qualitative data.

  • What do you think are the biggest challenges facing your community?

Likert Scale Questions

These questions ask participants to rate how much they agree or disagree with a statement. They are useful for measuring attitudes and opinions.

How strongly do you agree or disagree with the following statement:

“I enjoy exercising regularly.”

  • a. Strongly Agree
  • b. Agree
  • c. Neither Agree nor Disagree
  • d. Disagree
  • e. Strongly Disagree

Demographic Questions

These questions ask about the participant’s personal information such as age, gender, ethnicity, education level, etc. They are useful for segmenting the data and analyzing results by demographic groups.

  • What is your age?

Yes/No Questions

These questions only have two options: Yes or No. They are useful for getting simple, straightforward answers to a specific question.

Have you ever traveled outside of your home country?

Ranking Questions

These questions ask participants to rank several items in order of preference or importance. They are useful for measuring priorities or preferences.

Please rank the following factors in order of importance when choosing a restaurant:

  • a. Quality of Food
  • b. Ambiance
  • c. Location

Matrix Questions

These questions present a matrix or grid of options that participants can choose from. They are useful for getting data on multiple variables at once.

Please rate your agreement with each of the following statements (Strongly Agree / Agree / Neutral / Disagree / Strongly Disagree):

  • The product is easy to use
  • The product meets my needs
  • The product is affordable

Dichotomous Questions

These questions present two options that are opposite or contradictory. They are useful for measuring binary or polarized attitudes.

Do you support the death penalty?

How to Make a Questionnaire

Step-by-Step Guide for Making a Questionnaire:

  • Define your research objectives: Before you start creating questions, you need to define the purpose of your questionnaire and what you hope to achieve from the data you collect.
  • Choose the appropriate question types: Based on your research objectives, choose the appropriate question types to collect the data you need. Refer to the types of questions mentioned earlier for guidance.
  • Develop questions: Develop clear and concise questions that are easy for participants to understand. Avoid leading or biased questions that might influence the responses.
  • Organize questions: Organize questions in a logical and coherent order, starting with demographic questions followed by general questions, and ending with specific or sensitive questions.
  • Pilot the questionnaire: Test your questionnaire on a small group of participants to identify any flaws or issues with the questions or the format.
  • Refine the questionnaire: Based on feedback from the pilot, refine and revise the questionnaire as necessary to ensure that it is valid and reliable.
  • Distribute the questionnaire: Distribute the questionnaire to your target audience using a method that is appropriate for your research objectives, such as online surveys, email, or paper surveys.
  • Collect and analyze data: Collect the completed questionnaires and analyze the data using appropriate statistical methods (see the analysis sketch after this list). Draw conclusions from the data and use them to inform decision-making or further research.
  • Report findings: Present your findings in a clear and concise report, including a summary of the research objectives, methodology, key findings, and recommendations.
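As a concrete illustration of the "collect and analyze" step, here is a minimal Python sketch using pandas. The column names and response labels are hypothetical, not taken from any particular study; the point is simply that closed-ended answers reduce to counts and percentages.

```python
import pandas as pd

# Hypothetical completed questionnaires: one row per respondent.
responses = pd.DataFrame({
    "age_group": ["18-24", "25-34", "25-34", "35-44", "18-24"],
    "satisfaction": ["Very satisfied", "Somewhat satisfied", "Very satisfied",
                     "Somewhat dissatisfied", "Very satisfied"],
})

# Frequency counts for a closed-ended question...
counts = responses["satisfaction"].value_counts()

# ...and the same distribution as percentages.
percentages = responses["satisfaction"].value_counts(normalize=True) * 100

print(counts)
print(percentages.round(1))
```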

Questionnaire Administration Modes

There are several modes of questionnaire administration. The choice of mode depends on the research objectives, sample size, and available resources. Some common modes of administration include:

  • Self-administered paper questionnaires: Participants complete the questionnaire on paper, either in person or by mail. This mode is relatively low cost and easy to administer, but it may result in lower response rates and greater potential for errors in data entry.
  • Online questionnaires: Participants complete the questionnaire on a website or through email. This mode is convenient for both researchers and participants, as it allows for fast and easy data collection. However, it may be subject to issues such as low response rates, lack of internet access, and potential for fraudulent responses.
  • Telephone surveys: Trained interviewers administer the questionnaire over the phone. This mode allows for a large sample size and can result in higher response rates, but it is also more expensive and time-consuming than other modes.
  • Face-to-face interviews: Trained interviewers administer the questionnaire in person. This mode allows for a high degree of control over the survey environment and can result in higher response rates, but it is also more expensive and time-consuming than other modes.
  • Mixed-mode surveys: Researchers use a combination of two or more modes to administer the questionnaire, such as using online questionnaires for initial screening and following up with telephone interviews for more detailed information. This mode can help overcome some of the limitations of individual modes, but it requires careful planning and coordination.

Example of Questionnaire

Title of the Survey: Customer Satisfaction Survey

Introduction:

We appreciate your business and would like to ensure that we are meeting your needs. Please take a few minutes to complete this survey so that we can better understand your experience with our products and services. Your feedback is important to us and will help us improve our offerings.

Instructions:

Please read each question carefully and select the response that best reflects your experience. If you have any additional comments or suggestions, please feel free to include them in the space provided at the end of the survey.

1. How satisfied are you with our product quality?

  • Very satisfied
  • Somewhat satisfied
  • Somewhat dissatisfied
  • Very dissatisfied

2. How satisfied are you with our customer service? (Same response options as Question 1.)

3. How satisfied are you with the price of our products? (Same response options as Question 1.)

4. How likely are you to recommend our products to others?

  • Very likely
  • Somewhat likely
  • Somewhat unlikely
  • Very unlikely

5. How easy was it to find the information you were looking for on our website?

  • Very easy
  • Somewhat easy
  • Somewhat difficult
  • Very difficult

6. How satisfied are you with the overall experience of using our products and services? (Same response options as Question 1.)

7. Is there anything that you would like to see us improve upon or change in the future?

(Space for additional comments)

Conclusion:

Thank you for taking the time to complete this survey. Your feedback is valuable to us and will help us improve our products and services. If you have any further comments or concerns, please do not hesitate to contact us.

Applications of Questionnaire

Some common applications of questionnaires include:

  • Research: Questionnaires are commonly used in research to gather information from participants about their attitudes, opinions, behaviors, and experiences. This information can then be analyzed and used to draw conclusions and make inferences.
  • Healthcare: In healthcare, questionnaires can be used to gather information about patients’ medical history, symptoms, and lifestyle habits. This information can help healthcare professionals diagnose and treat medical conditions more effectively.
  • Marketing: Questionnaires are commonly used in marketing to gather information about consumers’ preferences, buying habits, and opinions on products and services. This information can help businesses develop and market products more effectively.
  • Human Resources: Questionnaires are used in human resources to gather information from job applicants, employees, and managers about job satisfaction, performance, and workplace culture. This information can help organizations improve their hiring practices, employee retention, and organizational culture.
  • Education: Questionnaires are used in education to gather information from students, teachers, and parents about their perceptions of the educational experience. This information can help educators identify areas for improvement and develop more effective teaching strategies.

Purpose of Questionnaire

Some common purposes of questionnaires include:

  • To collect information on attitudes, opinions, and beliefs: Questionnaires can be used to gather information on people’s attitudes, opinions, and beliefs on a particular topic. For example, a questionnaire can be used to gather information on people’s opinions about a particular political issue.
  • To collect demographic information: Questionnaires can be used to collect demographic information such as age, gender, income, education level, and occupation. This information can be used to analyze trends and patterns in the data.
  • To measure behaviors or experiences: Questionnaires can be used to gather information on behaviors or experiences such as health-related behaviors or experiences, job satisfaction, or customer satisfaction.
  • To evaluate programs or interventions: Questionnaires can be used to evaluate the effectiveness of programs or interventions by gathering information on participants’ experiences, opinions, and behaviors.
  • To gather information for research: Questionnaires can be used to gather data for research purposes on a variety of topics.

When to use Questionnaire

Here are some situations when questionnaires might be used:

  • When you want to collect data from a large number of people: Questionnaires are useful when you want to collect data from a large number of people. They can be distributed to a wide audience and can be completed at the respondent’s convenience.
  • When you want to collect data on specific topics: Questionnaires are useful when you want to collect data on specific topics or research questions. They can be designed to ask specific questions and can be used to gather quantitative data that can be analyzed statistically.
  • When you want to compare responses across groups: Questionnaires are useful when you want to compare responses across different groups of people. For example, you might want to compare responses from men and women, or from people of different ages or educational backgrounds.
  • When you want to collect data anonymously: Questionnaires can be useful when you want to collect data anonymously. Respondents can complete the questionnaire without fear of judgment or repercussions, which can lead to more honest and accurate responses.
  • When you want to save time and resources: Questionnaires can be more efficient and cost-effective than other methods of data collection such as interviews or focus groups. They can be completed quickly and easily, and can be analyzed using software to save time and resources.

Characteristics of Questionnaire

Here are some of the characteristics of questionnaires:

  • Standardization: Questionnaires are standardized tools that ask the same questions in the same order to all respondents. This ensures that all respondents are answering the same questions and that the responses can be compared and analyzed.
  • Objectivity: Questionnaires are designed to be objective, meaning that they do not contain leading questions or bias that could influence the respondent’s answers.
  • Predefined responses: Questionnaires typically provide predefined response options for the respondents to choose from, which helps to standardize the responses and make them easier to analyze.
  • Quantitative data: Questionnaires are designed to collect quantitative data, meaning that they provide numerical or categorical data that can be analyzed using statistical methods.
  • Convenience: Questionnaires are convenient for both the researcher and the respondents. They can be distributed and completed at the respondent’s convenience and can be easily administered to a large number of people.
  • Anonymity: Questionnaires can be anonymous, which can encourage respondents to answer more honestly and provide more accurate data.
  • Reliability: Questionnaires are designed to be reliable, meaning that they produce consistent results when administered multiple times to the same group of people.
  • Validity: Questionnaires are designed to be valid, meaning that they measure what they are intended to measure and are not influenced by other factors.

Advantages of Questionnaire

Some advantages of questionnaires are as follows:

  • Standardization: Questionnaires allow researchers to ask the same questions to all participants in a standardized manner. This helps ensure consistency in the data collected and eliminates potential bias that might arise if questions were asked differently to different participants.
  • Efficiency: Questionnaires can be administered to a large number of people at once, making them an efficient way to collect data from a large sample.
  • Anonymity: Participants can remain anonymous when completing a questionnaire, which may make them more likely to answer honestly and openly.
  • Cost-effective: Questionnaires can be relatively inexpensive to administer compared to other research methods, such as interviews or focus groups.
  • Objectivity: Because questionnaires are typically designed to collect quantitative data, they can be analyzed objectively without the influence of the researcher’s subjective interpretation.
  • Flexibility: Questionnaires can be adapted to a wide range of research questions and can be used in various settings, including online surveys, mail surveys, or in-person interviews.

Limitations of Questionnaire

Limitations of Questionnaire are as follows:

  • Limited depth: Questionnaires are typically designed to collect quantitative data, which may not provide a complete understanding of the topic being studied. Questionnaires may miss important details and nuances that could be captured through other research methods, such as interviews or observations.
  • Response bias: Participants may not always answer questions truthfully or accurately, either because they do not remember or because they want to present themselves in a particular way. This can lead to response bias, which can affect the validity and reliability of the data collected.
  • Limited flexibility: While questionnaires can be adapted to a wide range of research questions, they may not be suitable for all types of research. For example, they may not be appropriate for studying complex phenomena or for exploring participants’ experiences and perceptions in-depth.
  • Limited context: Questionnaires typically do not provide a rich contextual understanding of the topic being studied. They may not capture the broader social, cultural, or historical factors that may influence participants’ responses.
  • Limited control: Researchers may not have control over how participants complete the questionnaire, which can lead to variations in response quality or consistency.


Questionnaire Method In Research


A questionnaire is a research instrument consisting of a series of questions for the purpose of gathering information from respondents. Questionnaires can be thought of as a kind of written interview. They can be carried out face to face, by telephone, computer, or post.

Questionnaires provide a relatively cheap, quick, and efficient way of obtaining large amounts of information from a large sample of people.


Data can be collected relatively quickly because the researcher does not need to be present while the questionnaires are completed. This is useful for large populations when interviews would be impractical.

However, a problem with questionnaires is that respondents may lie due to social desirability. Most people want to present a positive image of themselves, and may lie or bend the truth to look good, e.g., pupils exaggerate revision duration.

Questionnaires can effectively measure the behavior, attitudes, preferences, opinions, and intentions of relatively large numbers of subjects more cheaply and quickly than other methods.

Often, a questionnaire uses both open and closed questions to collect data. This is beneficial as it means both quantitative and qualitative data can be obtained.

Closed Questions

A closed-ended question requires a specific, limited response, often “yes” or “no” or a choice that fits into predetermined categories.

Data that can be placed into a category is called nominal data. The category can be restricted to as few as two options, i.e., dichotomous (e.g., “yes” or “no,” “male” or “female”), or include quite complex lists of alternatives from which the respondent can choose (e.g., polytomous).

Closed questions can also provide ordinal data (which can be ranked). This often involves using a continuous rating scale to measure the strength of attitudes or emotions.

For example, strongly agree / agree / neutral / disagree / strongly disagree / unable to answer.
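Because such responses are ordinal (ranked, but with unknown spacing between categories), they are usually coded as integers before analysis. Here is a minimal Python sketch, assuming the five-point agreement scale above; “unable to answer” is treated as missing and excluded. The response data are hypothetical.

```python
from statistics import median

# Ordinal codes for a five-point agreement scale.
LIKERT_CODES = {
    "strongly disagree": 1,
    "disagree": 2,
    "neutral": 3,
    "agree": 4,
    "strongly agree": 5,
}

# Hypothetical responses; "unable to answer" is dropped as missing data.
answers = ["agree", "strongly agree", "neutral", "disagree", "unable to answer"]
codes = [LIKERT_CODES[a] for a in answers if a in LIKERT_CODES]  # [4, 5, 3, 2]

# For ordinal data, the median is a safer summary than the mean.
print(median(codes))  # 3.5
```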

Closed questions have been used to research type A personality (e.g., Friedman & Rosenman, 1974) and also to assess life events that may cause stress (Holmes & Rahe, 1967) and attachment (Fraley, Waller, & Brennan, 2000).

Strengths

  • They can be economical. This means they can provide large amounts of research data for relatively low costs. Therefore, a large sample size can be obtained, which should represent the population from which a researcher can then generalize.
  • The respondent provides information that can be easily converted into quantitative data (e.g., count the number of “yes” or “no” answers), allowing statistical analysis of the responses.
  • The questions are standardized. All respondents are asked exactly the same questions in the same order. This means a questionnaire can be replicated easily to check for reliability . Therefore, a second researcher can use the questionnaire to confirm consistent results.

Limitations

  • They lack detail. Because the responses are fixed, there is less scope for respondents to supply answers that reflect their true feelings on a topic.

Open Questions

Open questions allow for expansive, varied answers without preset options or limitations.

They let respondents express what they think in their own words, and in as much detail as they like. For example: “Can you tell me how happy you feel right now?”

Open questions work better when you want to gather more in-depth answers, and they are often used for complex questions that cannot be answered in a few simple categories but require more detail and discussion.

Lawrence Kohlberg presented his participants with moral dilemmas. One of the most famous concerns a character called Heinz, who is faced with the choice between watching his wife die of cancer or stealing the only drug that could help her.

Participants were asked whether Heinz should steal the drug or not and, more importantly, for their reasons why upholding or breaking the law is right.

Strengths

  • Rich qualitative data is obtained, as open questions allow respondents to elaborate on their answers. This means the researcher can determine why a person holds a certain attitude.

Limitations

  • Time-consuming to collect the data. It takes longer for respondents to complete open questions, so a smaller sample size may be obtained.
  • Time-consuming to analyze the data. The researcher has to read the answers and try to put them into categories by coding, which is often subjective and difficult. However, Smith (1992) has devoted an entire book to the issues of thematic content analysis, including 14 different scoring systems for open-ended questions.
  • Not suitable for less educated respondents, as open questions require superior writing skills and a better ability to express one’s feelings verbally.

Questionnaire Design

Because some questionnaires suffer from response rates as low as 5%, a questionnaire must be well designed.

There are several important factors in questionnaire design.

Question Order

Questions should progress logically from the least sensitive to the most sensitive, from the factual and behavioral to the cognitive, and from the more general to the more specific.

The researcher should ensure that previous questions do not influence the answer to a question.

Question order effects

  • Question order effects occur when responses to an earlier question affect responses to a later question in a survey. They can arise at different stages of the survey response process – interpretation, information retrieval, judgment/estimation, and reporting.
  • Types of question order effects include: unconditional (subsequent answers affected by prior question topic), conditional (subsequent answers depend on the response to the prior question), and associational (correlation between two questions changes based on order).
  • Question order effects have been found across different survey topics like social and political attitudes, health and safety studies, vignette research, etc. Effects may be moderated by respondent factors like age, education level, knowledge and attitudes about the topic.
  • To minimize question order effects, recommendations include avoiding judgmental dependencies between questions, separating potentially reactive questions, randomizing questions, following good survey design principles, considering respondent characteristics, and intentionally examining question context and order.
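Of the mitigations listed above, randomizing question order is the easiest to automate. The sketch below is a hypothetical Python illustration: each respondent gets an independently shuffled order, and seeding by respondent ID keeps each respondent’s order reproducible.

```python
import random

QUESTIONS = [
    "How satisfied are you with your job?",
    "How satisfied are you with your life overall?",
    "How many hours do you work per week?",
]

def questions_for_respondent(respondent_id):
    """Return an independently shuffled question order for one respondent."""
    rng = random.Random(respondent_id)  # reproducible per respondent
    order = list(QUESTIONS)
    rng.shuffle(order)
    return order

print(questions_for_respondent(42))
```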

Terminology

  • There should be a minimum of technical jargon. Questions should be simple, to the point, and easy to understand. The language of a questionnaire should be appropriate to the vocabulary of the group of people being studied.
  • Use statements that are interpreted in the same way by members of different subpopulations of the population of interest.
  • For example, the researcher must adapt the language of questions to match the social background of respondents: their age, educational level, social class, ethnicity, etc.

Ethical Issues

  • The researcher must ensure that the information provided by the respondent is kept confidential, e.g., name, address, etc.
  • This means questionnaires are good for researching sensitive topics as respondents will be more honest when they cannot be identified.
  • Keeping the questionnaire confidential should also reduce the likelihood of psychological harm, such as embarrassment.
  • Participants must provide informed consent before completing the questionnaire and must be aware that they have the right to withdraw their information at any time during the survey/study.

Problems with Postal Questionnaires

At first sight, the postal questionnaire seems to offer the opportunity to get around the problem of interview bias by reducing the personal involvement of the researcher. Its other practical advantages are that it is cheaper than face-to-face interviews and can quickly contact many respondents scattered over a wide area.

However, these advantages must be weighed against the practical problems of conducting research by post. A lack of involvement by the researcher means there is little control over the information-gathering process.

The data might not be valid (i.e., truthful) as we can never be sure that the questionnaire was completed by the person to whom it was addressed.

That, of course, assumes there is a reply in the first place, and one of the most intractable problems of mailed questionnaires is a low response rate. This diminishes the reliability of the data.

Also, postal questionnaires may not represent the population they are studying. This may be because:

  • Some questionnaires may be lost in the post, reducing the sample size.
  • The questionnaire may be completed by someone who is not a member of the research population.
  • Those with strong views on the questionnaire’s subject are more likely to complete it than those without interest.

Benefits of a Pilot Study

A pilot study is a small-scale practice study conducted before the main study.

It allows the researcher to try out the study with a few participants so that adjustments can be made before the main study, saving time and money.

It is important to conduct a questionnaire pilot study for the following reasons:

  • Check that respondents understand the terminology used in the questionnaire.
  • Check that emotive questions are not used, as they make people defensive and could invalidate their answers.
  • Check that leading questions have not been used as they could bias the respondent’s answer.
  • Ensure the questionnaire can be completed in an appropriate time frame (i.e., it’s not too long).

Frequently Asked Questions 

How do psychological researchers analyze the data collected from questionnaires?

Psychological researchers analyze questionnaire data by looking for patterns and trends in people’s responses. They use numbers and charts to summarize the information.

They calculate things like averages and percentages to see what most people think or feel. They also compare different groups to see if there are any differences between them.

By doing these analyses, researchers can understand how people think, feel, and behave. This helps them make conclusions and learn more about how our minds work.
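For instance, the group comparison described above often comes down to a grouped average. Here is a minimal sketch with hypothetical data, assuming pandas is available:

```python
import pandas as pd

# Hypothetical questionnaire data: a 1-10 happiness rating per respondent.
data = pd.DataFrame({
    "group":  ["A", "A", "B", "B", "B"],
    "rating": [7, 8, 5, 6, 4],
})

# Average rating per group: A = 7.5, B = 5.0.
print(data.groupby("group")["rating"].mean())
```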

Are questionnaires effective in gathering accurate data?

Yes, questionnaires can be effective in gathering accurate data. When designed well, with clear and understandable questions, they allow individuals to express their thoughts, opinions, and experiences.

However, the accuracy of the data depends on factors such as the honesty and accuracy of respondents’ answers, their understanding of the questions, and their willingness to provide accurate information. Researchers strive to create reliable and valid questionnaires to minimize biases and errors.

It’s important to remember that while questionnaires can provide valuable insights, they are just one tool among many used in psychological research.

Can questionnaires be used with diverse populations and cultural contexts?

Yes, questionnaires can be used with diverse populations and cultural contexts. Researchers take special care to ensure that questionnaires are culturally sensitive and appropriate for different groups.

This means adapting the language, examples, and concepts to match the cultural context. By doing so, questionnaires can capture the unique perspectives and experiences of individuals from various backgrounds.

This helps researchers gain a more comprehensive understanding of human behavior and ensures that everyone’s voice is heard and represented in psychological research.

Are questionnaires the only method used in psychological research?

No, questionnaires are not the only method used in psychological research. Psychologists use a variety of research methods, including interviews, observations, experiments, and psychological tests.

Each method has its strengths and limitations, and researchers choose the most appropriate method based on their research question and goals.

Questionnaires are valuable for gathering self-report data, but other methods allow researchers to directly observe behavior, study interactions, or manipulate variables to test hypotheses.

By using multiple methods, psychologists can gain a more comprehensive understanding of human behavior and mental processes.

What is a semantic differential scale?

The semantic differential scale is a questionnaire format used to gather data on individuals’ attitudes or perceptions. It’s commonly incorporated into larger surveys or questionnaires to assess subjective qualities or feelings about a specific topic, product, or concept by quantifying them on a scale between two bipolar adjectives.

It presents respondents with a pair of opposite adjectives (e.g., “happy” vs. “sad”) and asks them to mark their position on a scale between them, capturing the intensity of their feelings about a particular subject.

It quantifies subjective qualities, turning them into data that can be statistically analyzed.
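As a hedged illustration of that quantification, the sketch below scores a hypothetical seven-point “sad–happy” item, where each respondent’s mark is recorded as a position from 1 to 7:

```python
# One mark per respondent on a 7-point scale: 1 = "sad" ... 7 = "happy".
marks = [6, 5, 7, 3, 6]

mean_position = sum(marks) / len(marks)
print(f"Mean position toward 'happy': {mean_position:.2f} / 7")  # 5.40 / 7
```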

References

Ayidiya, S. A., & McClendon, M. J. (1990). Response effects in mail surveys. Public Opinion Quarterly, 54(2), 229–247. https://doi.org/10.1086/269200

Fraley, R. C., Waller, N. G., & Brennan, K. A. (2000). An item-response theory analysis of self-report measures of adult attachment. Journal of Personality and Social Psychology, 78, 350–365.

Friedman, M., & Rosenman, R. H. (1974). Type A behavior and your heart. New York: Knopf.

Gold, R. S., & Barclay, A. (2006). Order of question presentation and correlation between judgments of comparative and own risk. Psychological Reports, 99(3), 794–798. https://doi.org/10.2466/PR0.99.3.794-798

Holmes, T. H., & Rahe, R. H. (1967). The social readjustment rating scale. Journal of Psychosomatic Research, 11(2), 213–218.

Schwarz, N., & Hippler, H.-J. (1995). Subsequent questions may influence answers to preceding questions in mail surveys. Public Opinion Quarterly, 59(1), 93–97. https://doi.org/10.1086/269460

Smith, C. P. (Ed.). (1992). Motivation and personality: Handbook of thematic content analysis. Cambridge University Press.

Further Information

  • Questionnaire design and scale development
  • Questionnaire Appraisal Form

How to Design Effective Research Questionnaires for Robust Findings

As a staple in data collection, questionnaires help uncover robust and reliable findings that can transform industries, shape policies, and revolutionize understanding. Whether you are exploring societal trends or delving into scientific phenomena, the effectiveness of your research questionnaire can make or break your findings.

In this article, we aim to understand the core purpose of questionnaires, exploring how they serve as essential tools for gathering systematic data, both qualitative and quantitative, from diverse respondents. Read on as we explore the key elements that make up a winning questionnaire, the art of framing questions which are both compelling and rigorous, and the careful balance between simplicity and depth.


The Role of Questionnaires in Research

So, what is a questionnaire? A questionnaire is a structured set of questions designed to collect information, opinions, attitudes, or behaviors from respondents. It is one of the most commonly used data collection methods in research. Moreover, questionnaires can be used in various research fields, including social sciences, market research, healthcare, education, and psychology. Their adaptability makes them suitable for investigating diverse research questions.

Questionnaire and survey are two terms often used interchangeably, but they have distinct meanings in the context of research. A survey refers to the broader process of data collection that may involve various methods. A survey can encompass different data collection techniques, such as interviews, focus groups, observations, and yes, questionnaires.

Pros and Cons of Using Questionnaires in Research

While questionnaires offer numerous advantages in research, they also come with some disadvantages that researchers must be aware of and address appropriately. Careful questionnaire design, validation, and consideration of potential biases can help mitigate these disadvantages and enhance the effectiveness of using questionnaires as a data collection method.


Structured vs Unstructured Questionnaires

Structured Questionnaire:

A structured questionnaire consists of questions with predefined response options. Respondents are presented with a fixed set of choices and are required to select from those options. The questions in a structured questionnaire are designed to elicit specific and quantifiable responses. Structured questionnaires are particularly useful for collecting quantitative data and are often employed in surveys and studies where standardized and comparable data are necessary.

Advantages of Structured Questionnaires:

  • Easy to analyze and interpret: The fixed response options facilitate straightforward data analysis and comparison across respondents.
  • Efficient for large-scale data collection: Structured questionnaires are time-efficient, allowing researchers to collect data from a large number of respondents.
  • Reduces response bias: The predefined response options minimize potential response bias and maintain consistency in data collection.

Limitations of Structured Questionnaires:

  • Lack of depth: Structured questionnaires may not capture in-depth insights or nuances as respondents are limited to pre-defined response choices. Hence, they may not reveal the reasons behind respondents’ choices, limiting the understanding of their perspectives.
  • Limited flexibility: The fixed response options may not cover all potential responses, potentially restricting respondents’ answers.

Unstructured Questionnaire:

An unstructured questionnaire consists of questions that allow respondents to provide detailed and unrestricted responses. Unlike structured questionnaires, there are no predefined response options, giving respondents the freedom to express their thoughts in their own words. Furthermore, unstructured questionnaires are valuable for collecting qualitative data and obtaining in-depth insights into respondents’ experiences, opinions, or feelings.

Advantages of Unstructured Questionnaires:

  • Rich qualitative data: Unstructured questionnaires yield detailed and comprehensive qualitative data, providing valuable and novel insights into respondents’ perspectives.
  • Flexibility in responses: Respondents have the freedom to express themselves in their own words, allowing for a wide range of responses.

Limitations of Unstructured Questionnaires:

  • Time-consuming analysis: Analyzing open-ended responses can be time-consuming, since each response requires careful reading and interpretation.
  • Subjectivity in interpretation: The analysis of open-ended responses may be subjective, as researchers interpret and categorize responses based on their judgment.
  • May require smaller sample size: Due to the depth of responses, researchers may need a smaller sample size for comprehensive analysis, making generalizations more challenging.

Types of Questions in a Questionnaire

In a questionnaire, researchers typically use the following most common types of questions to gather a variety of information from respondents:

1. Open-Ended Questions:

These questions allow respondents to provide detailed and unrestricted responses in their own words. Open-ended questions are valuable for gathering qualitative data and in-depth insights.

Example: What suggestions do you have for improving our product?

2. Multiple-Choice Questions

Respondents choose one answer from a list of provided options. This type of question is suitable for gathering categorical data or preferences.

Example: Which of the following social media/academic networking platforms do you use to promote your research?

  • ResearchGate
  • Academia.edu

3. Dichotomous Questions

Respondents choose between two options, typically “yes” or “no”, “true” or “false”, or “agree” or “disagree”.

Example: Have you ever published in open access journals before?

4. Scaling Questions

These questions, also known as rating scale questions, use a predefined scale that allows respondents to rate or rank their level of agreement, satisfaction, importance, or other subjective assessments. These scales help researchers quantify subjective data and make comparisons across respondents.

There are several types of scaling techniques used in scaling questions:

i. Likert Scale:

The Likert scale is one of the most common scaling techniques. It presents respondents with a series of statements and asks them to rate their level of agreement or disagreement using a range of options, typically from “strongly agree” to “strongly disagree”. For example: Please indicate your level of agreement with the statement: “The content presented in the webinar was relevant and aligned with the advertised topic.”

  • Strongly Agree
  • Agree
  • Neither Agree nor Disagree
  • Disagree
  • Strongly Disagree

ii. Semantic Differential Scale:

The semantic differential scale measures respondents’ perceptions or attitudes towards an item using opposite adjectives or bipolar words. Respondents rate the item on a scale between the two opposites. For example:

  • Easy —— Difficult
  • Satisfied —— Unsatisfied
  • Very likely —— Very unlikely

iii. Numerical Rating Scale:

This scale requires respondents to provide a numerical rating on a predefined scale. It can be a simple 1 to 5 or 1 to 10 scale, where higher numbers indicate higher agreement, satisfaction, or importance.

iv. Ranking Questions:

Respondents rank items in order of preference or importance. Ranking questions help identify preferences or priorities.

Example: Please rank the following features of our app in order of importance (1 = Most Important, 5 = Least Important):

  • User Interface
  • Functionality
  • Customer Support

By using a mix of question types, researchers can gather both quantitative and qualitative data, providing a comprehensive understanding of the research topic and enabling meaningful analysis and interpretation of the results. The choice of question types depends on the research objectives , the desired depth of information, and the data analysis requirements.

Methods of Administering Questionnaires

There are several methods for administering questionnaires, and the choice of method depends on factors such as the target population, research objectives , convenience, and resources available. Here are some common methods of administering questionnaires:


Each method has its advantages and limitations. Online surveys offer convenience and a large reach, but they may be limited to individuals with internet access. Face-to-face interviews allow for in-depth responses but can be time-consuming and costly. Telephone surveys have broad reach but may be limited by declining response rates. Researchers should choose the method that best suits their research objectives, target population, and available resources to ensure successful data collection.

How to Design a Questionnaire

Designing a good questionnaire is crucial for gathering accurate and meaningful data that aligns with your research objectives. Here are essential steps and tips to create a well-designed questionnaire:


1. Define Your Research Objectives: Clearly outline the purpose and specific information you aim to gather through the questionnaire.

2. Identify Your Target Audience: Understand respondents’ characteristics and tailor the questionnaire accordingly.

3. Develop the Questions:

  • Write Clear and Concise Questions
  • Avoid Leading or Biasing Questions
  • Sequence Questions Logically
  • Group Related Questions
  • Include Demographic Questions

4. Provide Well-defined Response Options: Offer exhaustive response choices for closed-ended questions.

5. Consider Skip Logic and Branching: Customize the questionnaire based on previous answers (see the sketch after this guide).

6. Pilot Test the Questionnaire: Identify and address issues through a pilot study.

7. Seek Expert Feedback: Validate the questionnaire with subject matter experts.

8. Obtain Ethical Approval: Comply with ethical guidelines, obtain consent, and ensure confidentiality before administering the questionnaire.

9. Administer the Questionnaire: Choose the right mode and provide clear instructions.

10. Test the Survey Platform: Ensure compatibility and usability for online surveys.

By following these steps and paying attention to questionnaire design principles, you can create a well-structured and effective questionnaire that gathers reliable data and helps you achieve your research objectives.
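Step 5’s skip logic can be prototyped as a simple mapping from (question, answer) pairs to the next question. This is a hypothetical Python sketch, not any particular survey platform’s API; the question IDs and branch rules are invented for illustration.

```python
# Question texts, keyed by hypothetical question IDs.
QUESTIONS = {
    "q1": "Have you used our product before? (yes/no)",
    "q2": "How satisfied are you with the product? (1-5)",
    "q3": "What stopped you from trying the product?",
}

# Branch table: (current question, answer) -> next question.
# A None answer acts as the default branch for any response.
BRANCHES = {
    ("q1", "yes"): "q2",
    ("q1", "no"): "q3",
    ("q2", None): "end",
    ("q3", None): "end",
}

def next_question(current, answer):
    """Return the next question ID, honoring answer-specific branches first."""
    return BRANCHES.get((current, answer), BRANCHES.get((current, None), "end"))

print(next_question("q1", "no"))  # q3 -- non-users skip the satisfaction item
print(next_question("q2", "4"))   # end
```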

Characteristics of a Good Questionnaire

A good questionnaire possesses several essential elements that contribute to its effectiveness. Furthermore, these characteristics ensure that the questionnaire is well-designed, easy to understand, and capable of providing valuable insights. Here are some key characteristics of a good questionnaire:

1. Clarity and Simplicity: Questions should be clear, concise, and unambiguous. Avoid using complex language or technical terms that may confuse respondents. Simple and straightforward questions ensure that respondents interpret them consistently.

2. Relevance and Focus: Each question should directly relate to the research objectives and contribute to answering the research questions. Consequently, avoid including extraneous or irrelevant questions that could lead to data clutter.

3. Mix of Question Types: Utilize a mix of question types, including open-ended, Likert scale, and multiple-choice questions. This variety allows for both qualitative and quantitative data collection.

4. Validity and Reliability: Ensure the questionnaire measures what it intends to measure (validity) and produces consistent results upon repeated administration (reliability). Validation should be conducted through expert review and previous research.

5. Appropriate Length: Keep the questionnaire’s length appropriate and manageable to avoid respondent fatigue or dropouts. Long questionnaires may result in incomplete or rushed responses.

6. Clear Instructions: Include clear instructions at the beginning of the questionnaire to guide respondents on how to complete it. Explain any technical terms, formats, or concepts if necessary.

7. User-Friendly Format: Design the questionnaire to be visually appealing and user-friendly. Use consistent formatting, adequate spacing, and a logical page layout.

8. Data Validation and Cleaning: Incorporate validation checks to ensure data accuracy and reliability. Consider mechanisms to detect and correct inconsistent or missing responses during data cleaning (a sketch follows below).

By incorporating these characteristics, researchers can create a questionnaire that maximizes data quality, minimizes response bias, and provides valuable insights for their research.
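To make item 8 concrete, here is a minimal validation sketch; the field names and rules are hypothetical, and a real questionnaire would derive them from its own codebook:

```python
def validate_response(resp):
    """Return a list of validation problems found in one response record."""
    problems = []
    if resp.get("age") is None:
        problems.append("missing age")
    elif not 18 <= resp["age"] <= 99:
        problems.append(f"age out of range: {resp['age']}")
    if resp.get("satisfaction") not in {1, 2, 3, 4, 5}:
        problems.append("satisfaction must be an integer from 1 to 5")
    return problems

# Flag records for cleaning rather than silently correcting them.
print(validate_response({"age": 17, "satisfaction": 6}))
# ['age out of range: 17', 'satisfaction must be an integer from 1 to 5']
```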

In the pursuit of advancing research and gaining meaningful insights, investing time and effort into designing effective questionnaires is a crucial step. A well-designed questionnaire is more than a mere set of questions; it is a masterpiece of precision and ingenuity. Each question plays a vital role in shaping the narrative of our research, guiding us through the labyrinth of data to meaningful conclusions. Indeed, a well-designed questionnaire serves as a powerful tool for unlocking valuable insights and generating robust findings that impact society positively.


Frequently Asked Questions

What is a research questionnaire?

A research questionnaire is a structured tool used to gather data from participants in a systematic manner. It consists of a series of carefully crafted questions designed to collect specific information related to a research study.

Why are questionnaires important in research?

Questionnaires play a pivotal role in both quantitative and qualitative research, enabling researchers to collect insights, opinions, attitudes, or behaviors from respondents. This aids in hypothesis testing, understanding, and informed decision-making, ensuring consistency, efficiency, and facilitating comparisons.

Which research designs use questionnaires?

Questionnaires are a versatile tool employed in various research designs to gather data efficiently and comprehensively. They find extensive use in both quantitative and qualitative research methodologies, making them a fundamental component of research across disciplines. Some research designs that commonly utilize questionnaires include: cross-sectional studies, longitudinal studies, descriptive research, correlational studies, causal-comparative studies, experimental research, survey research, case studies, and exploratory research.

What is the difference between a survey and a questionnaire?

A survey is a comprehensive data collection method that can include various techniques like interviews and observations. A questionnaire is a specific set of structured questions within a survey designed to gather standardized responses. While a survey is a broader approach, a questionnaire is a focused tool for collecting specific data.

What are the main types of questionnaires?

The choice of questionnaire type depends on the research objectives, the type of data required, and the preferences of respondents. Some common types include:

  • Structured Questionnaires: These consist of predefined, closed-ended questions with fixed response options. They are easy to analyze and suitable for quantitative research.
  • Semi-Structured Questionnaires: These combine closed-ended questions with open-ended ones. They offer more flexibility for respondents to provide detailed explanations.
  • Unstructured Questionnaires: These contain open-ended questions only, allowing respondents to express their thoughts and opinions freely. They are commonly used in qualitative research.

How should a questionnaire be administered?

Following these steps ensures effective questionnaire administration and reliable data collection:

  • Choose a method: decide on online, face-to-face, mail, or phone administration.
  • For online surveys, use a platform such as SurveyMonkey.
  • Pilot test on a small group before full deployment.
  • Provide clear, concise instructions.
  • Follow up with reminders if needed.



Your ultimate guide to questionnaires and how to design a good one

The written questionnaire is the heart and soul of any survey research project. Whether you conduct your survey using an online questionnaire, in person, by email or over the phone, the way you design your questionnaire plays a critical role in shaping the quality of the data and insights that you’ll get from your target audience. Keep reading to get actionable tips.

What is a questionnaire?

A questionnaire is a research tool consisting of a set of questions or other ‘prompts’ to collect data from a set of respondents.

When used in most research, a questionnaire will consist of a number of types of questions (primarily open-ended and closed) in order to gain both quantitative data that can be analyzed to draw conclusions, and qualitative data to provide longer, more specific explanations.

A research questionnaire is often mistaken for a survey, and many people use the terms interchangeably. But that's incorrect, as we explain next.


Survey vs. questionnaire – what’s the difference?

Before we go too much further, let’s consider the differences between surveys and questionnaires.

These two terms are often used interchangeably, but there is an important difference between them.

Survey definition

A survey is the process of collecting data from a set of respondents and using it to gather insights.

Survey research can be conducted using a questionnaire, but won’t always involve one.

Questionnaire definition

A questionnaire is the list of questions you circulate to your target audience.

In other words, the survey is the task you’re carrying out, and the questionnaire is the instrument you’re using to do it.

By itself, a questionnaire doesn’t achieve much.

It’s when you put it into action as part of a survey that you start to get results.

Advantages vs disadvantages of using a questionnaire

While a questionnaire is a popular method of gathering data for market research and other studies, the method has a few disadvantages alongside its many advantages.

Let’s have a look at some of the advantages and disadvantages of using a questionnaire for collecting data.

Advantages of using a questionnaire

1. Questionnaires are relatively cheap

Depending on the complexity of your study, using a questionnaire can be cost effective compared to other methods.

You simply need to write your survey questionnaire, send it out, and then process the responses.

You can set up an online questionnaire relatively easily, or simply carry out market research on the street if that’s the best method.

2. You can get and analyze results quickly

Again, depending on the size of your survey, you can get results back quickly, often within 24 hours of putting the questionnaire live.

It also means you can start to analyze responses quickly too.

3. They’re easily scalable

You can easily send an online questionnaire to anyone in the world, and with the right software you can quickly identify your target audience and send your questionnaire to them.

4. Questionnaires are easy to analyze

If your questionnaire design has been done properly, it’s quick and easy to analyze results from questionnaires once responses start to come back.

This is particularly useful with large scale market research projects.

Because all respondents are answering the same questions, it’s simple to identify trends.

5. You can use the results to make accurate decisions

As a research instrument, a questionnaire is ideal for commercial research because the data comes straight from your target audience (or ideal customers), and the information you gather about their thoughts, preferences or behaviors allows you to make informed business decisions.

6. A questionnaire can cover any topic

One of the biggest advantages of using questionnaires when conducting research is that, because you can adapt them using different types and styles of open-ended and closed-ended questions, they can be used to gather data on almost any topic.

There are many types of questionnaires you can design to gather both quantitative data and qualitative data, so they're a useful tool for all kinds of data analysis.

Disadvantages of using a questionnaire

1. Respondents could lie

This is by far the biggest risk with a questionnaire, especially when dealing with sensitive topics.

Rather than give their actual opinion, a respondent might feel pressured to give the answer they deem more socially acceptable, which doesn’t give you accurate results.

2. Respondents might not answer every question

There are all kinds of reasons respondents might not answer every question: the questionnaire may be too long, they might not understand what's being asked, or they simply might not want to answer.

If you get questionnaires back without complete responses it could negatively affect your research data and provide an inaccurate picture.

3. They might interpret what’s being asked incorrectly

This is a particular problem when running a survey across geographical boundaries and often comes down to the design of the survey questionnaire.

If your questions aren’t written in a very clear way, the respondent might misunderstand what’s being asked and provide an answer that doesn’t reflect what they actually think.

Again this can negatively affect your research data.

4. You could introduce bias

The whole point of producing a questionnaire is to gather accurate data from which decisions can be made or conclusions drawn.

But the data collected can be heavily impacted if the researchers accidentally introduce bias into the questions.

This can be easily done if the researcher is trying to prove a certain hypothesis with their questionnaire, and unwittingly write questions that push people towards giving a certain answer.

In these cases, respondents' answers won't accurately reflect what is really happening, which stops you from gathering accurate data.

5. Respondents could get survey fatigue

One issue you can run into when sending out a questionnaire, particularly if you send them out regularly to the same survey sample, is that your respondents could start to suffer from survey fatigue.

In these circumstances, rather than thinking about the response options in the questionnaire and providing accurate answers, respondents could start to just tick boxes to get through the questionnaire quickly.

Again, this won’t give you an accurate data set.

Questionnaire design: How to do it

It’s essential to carefully craft a questionnaire to reduce survey error and optimize your data. The best way to think about the questionnaire is with the end result in mind.

How do you do that?

Start with questions, like:

  • What is my research purpose?
  • What data do I need?
  • How am I going to analyze that data?
  • What questions are needed to best suit these variables?

Once you have a clear idea of the purpose of your survey, you’ll be in a better position to create an effective questionnaire.

Here are a few steps to help you get into the right mindset.

1. Keep the respondent front and center

A survey is the process of collecting information from people, so it needs to be designed around human beings first and foremost.

In his post about survey design theory, David Vannette, PhD, from the Qualtrics Methodology Lab explains the correlation between the way a survey is designed and the quality of data that is extracted.

“To begin designing an effective survey, take a step back and try to understand what goes on in your respondents’ heads when they are taking your survey.

This step is critical to making sure that your questionnaire makes it as likely as possible that the response process follows that expected path.”

From writing the questions to designing the survey flow, the respondent's point of view should always be front and center in your mind during questionnaire design.

2. How to write survey questions

Your questionnaire should only be as long as it needs to be, and every question needs to deliver value.

That means your questions must each have an individual purpose and produce the best possible data for that purpose, all while supporting the overall goal of the survey.

A question must also be phrased in a way that is easy for all your respondents to understand, and that does not produce false results.

To do this, remember the following principles:

Get into the respondent's head

The process for a respondent answering a survey question looks like this:

  • The respondent reads the question and determines what information they need to answer it.
  • They search their memory for that information.
  • They make judgments about that information.
  • They translate that judgment into one of the answer options you’ve provided. This is the process of taking the data they have and matching that information with the question that’s asked.

When wording questions, make sure the question means the same thing to all respondents. Words should have one meaning, few syllables, and the sentences should have few words.

Only use the words needed to ask your question, and not a word more.

Note that it’s important that the respondent understands the intent behind your question.

If they don’t, they may answer a different question and the data can be skewed.

Some contextual help text, either in the introduction to the questionnaire or before the question itself, can help make sure the respondent understands your goals and the scope of your research.

Use mutually exclusive responses

Be sure to make your response categories mutually exclusive.

Consider the question:

What is your age?

  • 18–31
  • 31–40
  • 40–55
  • 55 and over

Respondents who are 31 years old have two options, as do respondents who are 40 and 55. As a result, it is impossible to predict which category they will choose.

This can distort results and frustrate respondents. It can be easily avoided by making responses mutually exclusive.

The following question is much better:

What is your age?

  • 18–30
  • 31–40
  • 41–55
  • 56 and over

This question is clear and will give us better results.
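If you build questionnaires programmatically, a quick automated check can catch overlapping or gappy numeric brackets before the survey goes live. The following Python sketch is illustrative only; the helper name and bracket values are invented, not part of any survey tool.

```python
# Minimal sketch: verify that numeric answer brackets are mutually
# exclusive and exhaustive before fielding a questionnaire.
# Brackets are (low, high) pairs: inclusive of low, exclusive of high.

def validate_brackets(brackets, lower=18, upper=100):
    ordered = sorted(brackets)
    # Exhaustive: cover the full range from lower to upper.
    if ordered[0][0] != lower or ordered[-1][1] != upper:
        return False
    # Mutually exclusive and gap-free: each bracket starts
    # exactly where the previous one ends.
    return all(prev[1] == cur[0] for prev, cur in zip(ordered, ordered[1:]))

overlapping = [(18, 32), (31, 41), (40, 56), (55, 100)]  # 31, 40 and 55 fit twice
clean       = [(18, 31), (31, 41), (41, 56), (56, 100)]  # every age fits exactly once

print(validate_brackets(overlapping))  # False
print(validate_brackets(clean))        # True
```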

Ask specific questions

Nonspecific questions can confuse respondents and influence results.

Do you like orange juice?

  • Like very much
  • Neither like nor dislike
  • Dislike very much

This question is very unclear. Is it asking about taste, texture, price, or the nutritional content? Different respondents will read this question differently.

A specific question will get more specific answers that are actionable.

How much do you like the current price of orange juice?

This question is more specific and will get better results.

If you need to collect responses about more than one aspect of a subject, you can include multiple questions on it. (Do you like the taste of orange juice? Do you like the nutritional content of orange juice? etc.)

Use a variety of question types

If all of your questionnaire, survey or poll questions are structured the same way (e.g. yes/no or multiple choice) the respondents are likely to become bored and tune out. That could mean they pay less attention to how they’re answering or even give up altogether.

Instead, mix up the question types to keep the experience interesting and varied. It’s a good idea to include questions that yield both qualitative and quantitative data.

For example, an open-ended questionnaire item such as “describe your attitude to life” will provide qualitative data – a form of information that’s rich, unstructured and unpredictable. The respondent will tell you in their own words what they think and feel.

A quantitative, closed-ended questionnaire item, such as “Which word describes your attitude to life? a) practical b) philosophical” gives you a much more structured answer, but the answers will be less rich and detailed.

Open-ended questions take more thought and effort to answer, so use them sparingly. They also require a different kind of treatment once your survey is in the analysis stage.

3. Pre-test your questionnaire

Always pre-test a questionnaire before sending it out to respondents. This will help catch any errors you might have missed. You could ask a colleague, friend, or an expert to take the survey and give feedback. If possible, ask a few cognitive questions like, “how did you get to that response?” and “what were you thinking about when you answered that question?” Figure out what was easy for the responder and where there is potential for confusion. You can then re-word where necessary to make the experience as frictionless as possible.

If your resources allow, you could also consider using a focus group to test out your survey. Having multiple respondents road-test the questionnaire will give you a better understanding of its strengths and weaknesses. Match the focus group to your target respondents as closely as possible, for example in terms of age, background, gender, and level of education.

Note: Don't forget to make your survey as accessible as possible for increased response rates.

Questionnaire examples and templates

There are free questionnaire templates and example questions available for all kinds of surveys and market research, many of them online. But they’re not all created equal and you should use critical judgement when selecting one. After all, the questionnaire examples may be free but the time and energy you’ll spend carrying out a survey are not.

If you’re using online questionnaire templates as the basis for your own, make sure it has been developed by professionals and is specific to the type of research you’re doing to ensure higher completion rates. As we’ve explored here, using the wrong kinds of questions can result in skewed or messy data, and could even prompt respondents to abandon the questionnaire without finishing or give thoughtless answers.

You’ll find a full library of downloadable survey templates in the Qualtrics Marketplace, covering many different types of research from employee engagement to post-event feedback. All are fully customizable and have been developed by Qualtrics experts.


Qualitative study design: Surveys & questionnaires


Qualitative surveys use open-ended questions to produce long-form written or typed answers. The questions aim to reveal opinions, experiences, narratives or accounts. A qualitative survey is often a useful precursor to interviews or focus groups, as it helps identify initial themes or issues to explore further in the research. Surveys can be used iteratively, being changed and modified over the course of the research to elicit new information.

Structured interviews may follow a similar form of open questioning.

Qualitative surveys frequently include quantitative questions to establish elements such as age, nationality, etc.

Qualitative surveys aim to elicit a detailed response to an open-ended topic question in the participant’s own words. As with quantitative surveys, there are three main methods of administering qualitative surveys: face-to-face, phone, and online. Each method has strengths and limitations.

Face to face surveys  

  • The researcher asks participants one or more open-ended questions about a topic, typically while in view of the participant’s facial expressions and other behaviours while answering. Being able to view the respondent’s reactions enables the researcher to ask follow-up questions to elicit a more detailed response, and to follow up on any facial or behavioural cues that seem at odds with what the participant is explicitly saying.
  • Face to face qualitative survey responses are likely to be audio recorded and transcribed into text to ensure all detail is captured; however, some surveys may include both quantitative and qualitative questions using a structured or semi-structured format of questioning, and in this case the researcher may simply write down key points from the participant’s response.

Telephone surveys

  • Similar to the face-to-face method, but without the researcher being able to see the participant’s facial or behavioural responses to the questions asked. This means the researcher may miss key cues that would help them ask further questions to clarify or extend participant responses, and must instead rely on vocal cues.

Online surveys

  • Open-ended questions are presented to participants in written format via email or within an online survey tool, often alongside quantitative survey questions on the same topic.
  • Researchers may provide some contextualising information or key definitions to help ‘frame’ how participants view the qualitative survey questions, since they can’t directly ask the researcher about it in real time. 
  • Participants are asked to respond to questions in text, ‘in some detail’, to explain their perspective or experience to researchers; this can result in a diversity of responses, from brief to detailed.
  • Researchers cannot always probe or clarify participant responses to online qualitative survey questions, which can leave the resulting data cryptic or vague to the researcher.
  • Online surveys can collect a greater number of responses in a set period of time compared to face to face and phone survey approaches, so while data may be less detailed, there is more of it overall to compensate.

Strengths

Qualitative surveys can help a study early on by identifying the issues, needs and experiences to be explored further in an interview or focus group.

Surveys can be amended and re-run based on responses providing an evolving and responsive method of research. 

Online surveys receive typed responses, reducing the need for transcription and interpretation by the researcher.

Online surveys can be delivered broadly across a wide population with asynchronous delivery/response. 

Limitations

Hand-written notes will need to be transcribed for digital study (which is time-consuming) and kept physically for reference.

Distance (or online) communication can be open to misinterpretations that cannot be corrected at the time. 

Questions can be leading or misleading, eliciting answers that are not core to the research subject. Researchers must aim to write neutral questions that do not give away the researcher’s expectations.

Even with transcribed or digital responses, analysis can be long and detailed, though not as much as for an interview.

Surveys may be left incomplete if performed online, or if administered by research assistants who are not well trained in giving the survey or structured interview.

Narrow sampling may skew the results of the survey. 

Example questions

Here are some example survey questions which are open ended and require a long form written response:

  • Tell us why you became a doctor.
  • What do you expect from this health service? 
  • How do you explain the low levels of financial investment in mental health services? (WHO, 2007) 

Example studies

  • Davey, L., Clarke, V., & Jenkinson, E. (2019). Living with alopecia areata: An online qualitative survey study. British Journal of Dermatology, 180, 1377–1389. https://doi.org/10.1111/bjd.17463
  • Richardson, J. (2004). What patients expect from complementary therapy: A qualitative study. American Journal of Public Health, 94(6), 1049–1053.
  • Saraceno, B., van Ommeren, M., Batniji, R., Cohen, A., Gureje, O., Mahoney, J., ... & Underhill, C. (2007). Barriers to improvement of mental health services in low-income and middle-income countries. The Lancet, 370(9593), 1164–1174. https://www.sciencedirect.com/science/article/pii/S014067360761263X

More detail on the Lancet article, including the actual survey questions, can be found in the WHO report below:

  • World Health Organization. (2007). Expert opinion on barriers and facilitating factors for the implementation of existing mental health knowledge in mental health services. Geneva: World Health Organization. https://apps.who.int/iris/handle/10665/44808
  • Green, J., & Thorogood, N. (2018). Qualitative methods for health research. SAGE.
  • Jansen, H. (2010). The logic of qualitative survey research and its position in the field of social research methods. Forum Qualitative Sozialforschung, 11(2). http://www.qualitative-research.net/index.php/fqs/article/view/1450/2946
  • Nielsen Norman Group. (2019). 28 tips for creating great qualitative surveys. https://www.nngroup.com/articles/qualitative-surveys/


Writing Survey Questions

Perhaps the most important part of the survey process is the creation of questions that accurately measure the opinions, experiences and behaviors of the public. Accurate random sampling will be wasted if the information gathered is built on a shaky foundation of ambiguous or biased questions. Creating good measures involves both writing good questions and organizing them to form the questionnaire.

Questionnaire design is a multistage process that requires attention to many details at once. Designing the questionnaire is complicated because surveys can ask about topics in varying degrees of detail, questions can be asked in different ways, and questions asked earlier in a survey may influence how people respond to later questions. Researchers are also often interested in measuring change over time and therefore must be attentive to how opinions or behaviors have been measured in prior surveys.

Surveyors may conduct pilot tests or focus groups in the early stages of questionnaire development in order to better understand how people think about an issue or comprehend a question. Pretesting a survey is an essential step in the questionnaire design process to evaluate how people respond to the overall questionnaire and specific questions, especially when questions are being introduced for the first time.

For many years, surveyors approached questionnaire design as an art, but substantial research over the past forty years has demonstrated that there is a lot of science involved in crafting a good survey questionnaire. Here, we discuss the pitfalls and best practices of designing questionnaires.

Question development

There are several steps involved in developing a survey questionnaire. The first is identifying what topics will be covered in the survey. For Pew Research Center surveys, this involves thinking about what is happening in our nation and the world and what will be relevant to the public, policymakers and the media. We also track opinion on a variety of issues over time so we often ensure that we update these trends on a regular basis to better understand whether people’s opinions are changing.

At Pew Research Center, questionnaire development is a collaborative and iterative process where staff meet to discuss drafts of the questionnaire several times over the course of its development. We frequently test new survey questions ahead of time through qualitative research methods such as focus groups, cognitive interviews, pretesting (often using an online, opt-in sample), or a combination of these approaches. Researchers use insights from this testing to refine questions before they are asked in a production survey, such as on the ATP.

Measuring change over time

Many surveyors want to track changes over time in people’s attitudes, opinions and behaviors. To measure change, questions are asked at two or more points in time. A cross-sectional design surveys different people in the same population at multiple points in time. A panel, such as the ATP, surveys the same people over time. However, it is common for the set of people in survey panels to change over time as new panelists are added and some prior panelists drop out. Many of the questions in Pew Research Center surveys have been asked in prior polls. Asking the same questions at different points in time allows us to report on changes in the overall views of the general public (or a subset of the public, such as registered voters, men or Black Americans), or what we call “trending the data”.

When measuring change over time, it is important to use the same question wording and to be sensitive to where the question is asked in the questionnaire, to maintain a similar context as when the question was asked previously (see question wording and question order for further information). All of our survey reports include a topline questionnaire that provides the exact question wording and sequencing, along with results from the current survey and previous surveys in which we asked the question.

The Center’s transition from conducting U.S. surveys by live telephone interviewing to an online panel (around 2014 to 2020) complicated some opinion trends, but not others. Opinion trends that ask about sensitive topics (e.g., personal finances or attending religious services) or that elicited volunteered answers (e.g., “neither” or “don’t know”) over the phone tended to show larger differences than other trends when shifting from phone polls to the online ATP. The Center adopted several strategies for coping with changes to data trends that may be related to this change in methodology. If there is evidence suggesting that a change in a trend stems from switching from phone to online measurement, Center reports flag that possibility for readers to try to head off confusion or erroneous conclusions.
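To make “trending the data” concrete, here is a minimal Python sketch that computes the percentage giving each answer in each survey wave. The wave labels, answers and column names are hypothetical, and a real analysis would also apply survey weights and account for mode effects.

```python
# Minimal sketch: a basic trend table from stacked survey waves.
# Data and column names are hypothetical.
import pandas as pd

responses = pd.DataFrame({
    "wave":   ["2020", "2020", "2020", "2020", "2022", "2022", "2022", "2022"],
    "answer": ["Favor", "Favor", "Oppose", "Favor",
               "Oppose", "Favor", "Oppose", "Oppose"],
})

# Percent choosing each answer, by wave: the shape of a trend table.
trend = (responses.groupby("wave")["answer"]
         .value_counts(normalize=True)
         .mul(100).round(1)
         .unstack(fill_value=0))
print(trend)
# answer  Favor  Oppose
# wave
# 2020     75.0    25.0
# 2022     25.0    75.0
```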

Open- and closed-ended questions

One of the most significant decisions that can affect how people answer questions is whether the question is posed as an open-ended question, where respondents provide a response in their own words, or a closed-ended question, where they are asked to choose from a list of answer choices.

For example, in a poll conducted after the 2008 presidential election, people responded very differently to two versions of the question: “What one issue mattered most to you in deciding how you voted for president?” One was closed-ended and the other open-ended. In the closed-ended version, respondents were provided five options and could volunteer an option not on the list.

When explicitly offered the economy as a response, more than half of respondents (58%) chose this answer; only 35% of those who responded to the open-ended version volunteered the economy. Moreover, among those asked the closed-ended version, fewer than one-in-ten (8%) provided a response other than the five they were read. By contrast, fully 43% of those asked the open-ended version provided a response not listed in the closed-ended version of the question. All of the other issues were chosen at least slightly more often when explicitly offered in the closed-ended version than in the open-ended version. (Also see  “High Marks for the Campaign, a High Bar for Obama”  for more information.)


Researchers will sometimes conduct a pilot study using open-ended questions to discover which answers are most common. They will then develop closed-ended questions based on that pilot study, with the most common responses included as answer choices. In this way, the questions may better reflect what the public is thinking, how they view a particular issue, or bring certain issues to light that the researchers may not have been aware of.
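As a rough illustration of that pilot-to-closed-ended workflow, the Python sketch below tallies hypothetical open-ended pilot answers and keeps the most common ones as candidate answer choices. The responses and the cut-off of five are invented for the example.

```python
# Minimal sketch: derive candidate answer choices from open-ended
# pilot responses. All data here is invented.
from collections import Counter

pilot_answers = [
    "the economy", "health care", "the economy", "education",
    "the economy", "health care", "immigration", "the economy",
]

# The most frequent pilot responses become the closed-ended options;
# anything rarer can be folded into "Other (please specify)".
candidate_choices = [answer for answer, _ in Counter(pilot_answers).most_common(5)]
print(candidate_choices)
# ['the economy', 'health care', 'education', 'immigration']
```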

When asking closed-ended questions, the choice of options provided, how each option is described, the number of response options offered, and the order in which options are read can all influence how people respond. One example of the impact of how categories are defined can be found in a Pew Research Center poll conducted in January 2002. When half of the sample was asked whether it was “more important for President Bush to focus on domestic policy or foreign policy,” 52% chose domestic policy while only 34% said foreign policy. When the category “foreign policy” was narrowed to a specific aspect – “the war on terrorism” – far more people chose it; only 33% chose domestic policy while 52% chose the war on terrorism.

In most circumstances, the number of answer choices should be kept to a relatively small number – just four or perhaps five at most – especially in telephone surveys. Psychological research indicates that people have a hard time keeping more than this number of choices in mind at one time. When the question is asking about an objective fact and/or demographics, such as the religious affiliation of the respondent, more categories can be used. In fact, they are encouraged to ensure inclusivity. For example, Pew Research Center’s standard religion questions include more than 12 different categories, beginning with the most common affiliations (Protestant and Catholic). Most respondents have no trouble with this question because they can expect to see their religious group within that list in a self-administered survey.

In addition to the number and choice of response options offered, the order of answer categories can influence how people respond to closed-ended questions. Research suggests that in telephone surveys respondents more frequently choose items heard later in a list (a “recency effect”), and in self-administered surveys, they tend to choose items at the top of the list (a “primacy” effect).

Because of concerns about the effects of category order on responses to closed-ended questions, many sets of response options in Pew Research Center’s surveys are programmed to be randomized, so that the options are not presented in the same order to each respondent. Rotating or randomizing means that questions or items in a list are not asked in the same order for every respondent. Answers to questions are sometimes affected by questions that precede them; by presenting questions in a different order to each respondent, we ensure that each question gets asked in the same context as every other question the same number of times (e.g., first, last or any position in between). This does not eliminate the potential impact of previous questions on the current question, but it does ensure that this bias is spread randomly across all of the questions or items in the list. For instance, in the example discussed above about what issue mattered most in people’s vote, the order of the five issues in the closed-ended version of the question was randomized so that no one issue appeared early or late in the list for all respondents.

Questions with ordinal response categories – those with an underlying order (e.g., excellent, good, only fair, poor OR very favorable, mostly favorable, mostly unfavorable, very unfavorable) – are generally not randomized because the order of the categories conveys important information to help respondents answer the question. Generally, these types of scales should be presented in order so respondents can easily place their responses along the continuum, but the order can be reversed for some respondents. For example, in one of Pew Research Center’s questions about abortion, half of the sample is asked whether abortion should be “legal in all cases, legal in most cases, illegal in most cases, illegal in all cases,” while the other half of the sample is asked the same question with the response categories read in reverse order, starting with “illegal in all cases.” Again, reversing the order does not eliminate the recency effect but distributes it randomly across the population.
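As a concrete sketch of the two practices just described, the Python snippet below shuffles unordered response options per respondent, but only reverses an ordinal scale for a random half of respondents. This is an illustration, not Pew’s actual survey software; the questions and seed are invented.

```python
# Minimal sketch: per-respondent ordering of response options.
# Unordered (nominal) options are shuffled; ordered (ordinal) scales
# keep their order but are reversed for roughly half of respondents.
import random

def order_options(options, ordinal, rng):
    if ordinal:
        return options if rng.random() < 0.5 else list(reversed(options))
    shuffled = options[:]   # copy so the master list is never mutated
    rng.shuffle(shuffled)
    return shuffled

rng = random.Random(42)     # seeded only to make the demo repeatable
issues = ["The economy", "Health care", "Education", "Immigration"]
scale  = ["Legal in all cases", "Legal in most cases",
          "Illegal in most cases", "Illegal in all cases"]

print(order_options(issues, ordinal=False, rng=rng))  # a random order
print(order_options(scale,  ordinal=True,  rng=rng))  # forward or reversed
```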

Question wording

The choice of words and phrases in a question is critical in expressing the meaning and intent of the question to the respondent and ensuring that all respondents interpret the question the same way. Even small wording differences can substantially affect the answers people provide.


An example of a wording difference that had a significant impact on responses comes from a January 2003 Pew Research Center survey. When people were asked whether they would “favor or oppose taking military action in Iraq to end Saddam Hussein’s rule,” 68% said they favored military action while 25% said they opposed military action. However, when asked whether they would “favor or oppose taking military action in Iraq to end Saddam Hussein’s rule  even if it meant that U.S. forces might suffer thousands of casualties, ” responses were dramatically different; only 43% said they favored military action, while 48% said they opposed it. The introduction of U.S. casualties altered the context of the question and influenced whether people favored or opposed military action in Iraq.

There has been a substantial amount of research to gauge the impact of different ways of asking questions and how to minimize differences in the way respondents interpret what is being asked. The issues related to question wording are more numerous than can be treated adequately in this short space, but below are a few of the important things to consider:

First, it is important to ask questions that are clear and specific and that each respondent will be able to answer. If a question is open-ended, it should be evident to respondents that they can answer in their own words and what type of response they should provide (an issue or problem, a month, number of days, etc.). Closed-ended questions should include all reasonable responses (i.e., the list of options is exhaustive) and the response categories should not overlap (i.e., response options should be mutually exclusive). Further, it is important to discern when it is best to use forced-choice close-ended questions (often denoted with a radio button in online surveys) versus “select-all-that-apply” lists (or check-all boxes). A 2019 Center study found that forced-choice questions tend to yield more accurate responses, especially for sensitive questions.  Based on that research, the Center generally avoids using select-all-that-apply questions.

It is also important to ask only one question at a time. Questions that ask respondents to evaluate more than one concept (known as double-barreled questions) – such as “How much confidence do you have in President Obama to handle domestic and foreign policy?” – are difficult for respondents to answer and often lead to responses that are difficult to interpret. In this example, it would be more effective to ask two separate questions, one about domestic policy and another about foreign policy.

In general, questions that use simple and concrete language are more easily understood by respondents. It is especially important to consider the education level of the survey population when thinking about how easy it will be for respondents to interpret and answer a question. Double negatives (e.g., do you favor or oppose  not  allowing gays and lesbians to legally marry) or unfamiliar abbreviations or jargon (e.g., ANWR instead of Arctic National Wildlife Refuge) can result in respondent confusion and should be avoided.

Similarly, it is important to consider whether certain words may be viewed as biased or potentially offensive to some respondents, as well as the emotional reaction that some words may provoke. For example, in a 2005 Pew Research Center survey, 51% of respondents said they favored “making it legal for doctors to give terminally ill patients the means to end their lives,” but only 44% said they favored “making it legal for doctors to assist terminally ill patients in committing suicide.” Although both versions of the question are asking about the same thing, the reaction of respondents was different. In another example, respondents have reacted differently to questions using the word “welfare” as opposed to the more generic “assistance to the poor.” Several experiments have shown that there is much greater public support for expanding “assistance to the poor” than for expanding “welfare.”

We often write two versions of a question and ask half of the survey sample one version of the question and the other half the second version. Thus, we say we have two  forms  of the questionnaire. Respondents are assigned randomly to receive either form, so we can assume that the two groups of respondents are essentially identical. On questions where two versions are used, significant differences in the answers between the two forms tell us that the difference is a result of the way we worded the two versions.
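A minimal Python sketch of the random assignment behind such a two-form experiment follows. The wordings echo the “assistance to the poor” versus “welfare” example above, and the respondent IDs are invented.

```python
# Minimal sketch: randomly assign each respondent one of two question
# wordings, so differences in answers can be traced to the wording.
import random

FORM_A = "Do you favor or oppose expanding assistance to the poor?"
FORM_B = "Do you favor or oppose expanding welfare?"

def assign_forms(respondent_ids, rng):
    """Return {respondent_id: question_text}, split roughly 50/50."""
    return {rid: FORM_A if rng.random() < 0.5 else FORM_B
            for rid in respondent_ids}

rng = random.Random(7)      # seeded only so the demo is repeatable
for rid, wording in assign_forms(range(6), rng).items():
    print(rid, "->", wording)
```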


One of the most common formats used in survey questions is the “agree-disagree” format. In this type of question, respondents are asked whether they agree or disagree with a particular statement. Research has shown that, compared with the better educated and better informed, less educated and less informed respondents have a greater tendency to agree with such statements. This is sometimes called an “acquiescence bias” (since some kinds of respondents are more likely to acquiesce to the assertion than are others). This behavior is even more pronounced when there’s an interviewer present, rather than when the survey is self-administered. A better practice is to offer respondents a choice between alternative statements. A Pew Research Center experiment with one of its routinely asked values questions illustrates the difference that question format can make. Not only does the forced choice format yield a very different result overall from the agree-disagree format, but the pattern of answers between respondents with more or less formal education also tends to be very different.

One other challenge in developing questionnaires is what is called “social desirability bias.” People have a natural tendency to want to be accepted and liked, and this may lead people to provide inaccurate answers to questions that deal with sensitive subjects. Research has shown that respondents understate alcohol and drug use, tax evasion and racial bias. They also may overstate church attendance, charitable contributions and the likelihood that they will vote in an election. Researchers attempt to account for this potential bias in crafting questions about these topics. For instance, when Pew Research Center surveys ask about past voting behavior, it is important to note that circumstances may have prevented the respondent from voting: “In the 2012 presidential election between Barack Obama and Mitt Romney, did things come up that kept you from voting, or did you happen to vote?” The choice of response options can also make it easier for people to be honest. For example, a question about church attendance might include three of six response options that indicate infrequent attendance. Research has also shown that social desirability bias can be greater when an interviewer is present (e.g., telephone and face-to-face surveys) than when respondents complete the survey themselves (e.g., paper and web surveys).

Lastly, because slight modifications in question wording can affect responses, identical question wording should be used when the intention is to compare results to those from earlier surveys. Similarly, because question wording and responses can vary based on the mode used to survey respondents, researchers should carefully evaluate the likely effects on trend measurements if a different survey mode will be used to assess change in opinion over time.

Question order

Once the survey questions are developed, particular attention should be paid to how they are ordered in the questionnaire. Surveyors must be attentive to how questions early in a questionnaire may have unintended effects on how respondents answer subsequent questions. Researchers have demonstrated that the order in which questions are asked can influence how people respond; earlier questions can unintentionally provide context for the questions that follow (these effects are called “order effects”).

One kind of order effect can be seen in responses to open-ended questions. Pew Research Center surveys generally ask open-ended questions about national problems, opinions about leaders and similar topics near the beginning of the questionnaire. If closed-ended questions that relate to the topic are placed before the open-ended question, respondents are much more likely to mention concepts or considerations raised in those earlier questions when responding to the open-ended question.

For closed-ended opinion questions, there are two main types of order effects: contrast effects (where the order results in greater differences in responses), and assimilation effects (where responses are more similar as a result of their order).


An example of a contrast effect can be seen in a Pew Research Center poll conducted in October 2003, a dozen years before same-sex marriage was legalized in the U.S. That poll found that people were more likely to favor allowing gays and lesbians to enter into legal agreements that give them the same rights as married couples when this question was asked after one about whether they favored or opposed allowing gays and lesbians to marry (45% favored legal agreements when asked after the marriage question, but 37% favored legal agreements without the immediate preceding context of a question about same-sex marriage). Responses to the question about same-sex marriage, meanwhile, were not significantly affected by its placement before or after the legal agreements question.


Another experiment embedded in a December 2008 Pew Research Center poll also resulted in a contrast effect. When people were asked “All in all, are you satisfied or dissatisfied with the way things are going in this country today?” immediately after having been asked “Do you approve or disapprove of the way George W. Bush is handling his job as president?”, 88% said they were dissatisfied, compared with only 78% without the context of the prior question.

Responses to presidential approval remained relatively unchanged whether national satisfaction was asked before or after it. A similar finding occurred in December 2004 when both satisfaction and presidential approval were much higher (57% were dissatisfied when Bush approval was asked first vs. 51% when general satisfaction was asked first).

Several studies also have shown that asking a more specific question before a more general question (e.g., asking about happiness with one’s marriage before asking about one’s overall happiness) can result in a contrast effect. Although some exceptions have been found, people tend to avoid redundancy by excluding the more specific question from the general rating.

Assimilation effects occur when responses to two questions are more consistent or closer together because of their placement in the questionnaire. We found an example of an assimilation effect in a Pew Research Center poll conducted in November 2008 when we asked whether Republican leaders should work with Obama or stand up to him on important issues and whether Democratic leaders should work with Republican leaders or stand up to them on important issues. People were more likely to say that Republican leaders should work with Obama when the question was preceded by the one asking what Democratic leaders should do in working with Republican leaders (81% vs. 66%). However, when people were first asked about Republican leaders working with Obama, fewer said that Democratic leaders should work with Republican leaders (71% vs. 82%).

The order questions are asked is of particular importance when tracking trends over time. As a result, care should be taken to ensure that the context is similar each time a question is asked. Modifying the context of the question could call into question any observed changes over time (see  measuring change over time  for more information).

A questionnaire, like a conversation, should be grouped by topic and unfold in a logical order. It is often helpful to begin the survey with simple questions that respondents will find interesting and engaging. Throughout the survey, an effort should be made to keep the survey interesting and not overburden respondents with several difficult questions right after one another. Demographic questions such as income, education or age should not be asked near the beginning of a survey unless they are needed to determine eligibility for the survey or for routing respondents through particular sections of the questionnaire. Even then, it is best to precede such items with more interesting and engaging questions. One virtue of survey panels like the ATP is that demographic questions usually only need to be asked once a year, not in each survey.



Questionnaires

Questionnaires can be classified as both a quantitative and a qualitative method, depending on the nature of the questions. Specifically, answers obtained through closed-ended questions (also called restricted questions) with multiple-choice answer options are analyzed using quantitative methods, and the findings can be illustrated using tabulations, pie charts, bar charts and percentages.

Answers to open-ended questionnaire questions (also known as unrestricted questions), on the other hand, are analyzed using qualitative methods: primary data collected with open-ended questionnaires is examined through discussion and critical analysis, without the use of numbers and calculations.
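As a simple illustration of those tabulations and percentages, here is a minimal Python sketch using pandas; the answers are invented.

```python
# Minimal sketch: tabulate closed-ended answers into counts and
# percentages. The responses are invented.
import pandas as pd

answers = pd.Series(["Yes", "No", "Yes", "Yes", "No", "Yes", "No", "Yes"])

table = pd.DataFrame({
    "count":   answers.value_counts(),
    "percent": answers.value_counts(normalize=True).mul(100).round(1),
})
print(table)
#      count  percent
# Yes      5     62.5
# No       3     37.5
```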

The following types of questionnaires can be distinguished:

Computer (online) questionnaire. Respondents are asked to complete a questionnaire delivered electronically, for example by email or a web link. The advantages of computer questionnaires include their low cost and time-efficiency; respondents do not feel pressured and can answer when they have time, giving more accurate answers. The main shortcoming, however, is that respondents may simply ignore the questionnaire.

Telephone questionnaire. The researcher may choose to call potential respondents with the aim of getting them to answer the questionnaire. The advantage of the telephone questionnaire is that it can be completed within a short amount of time. The main disadvantages are that it is usually expensive, that most people do not feel comfortable answering many questions over the phone, and that it is difficult to get a sample group to answer a questionnaire this way.

In-house survey. This type of questionnaire involves the researcher visiting respondents in their homes or workplaces. The advantage of the in-house survey is that respondents can give the questions more focus. However, in-house surveys also have a range of disadvantages: they are time-consuming and more expensive, and respondents may not wish to have the researcher in their homes or workplaces for various reasons.

Mail questionnaire. This sort of questionnaire involves the researcher sending the questionnaire to respondents by post, often with a pre-paid envelope attached. Mail questionnaires have the advantage of providing more accurate answers, because respondents can complete them in their spare time. The associated disadvantages include expense, time, and questionnaires sometimes ending up in the bin.

Questionnaires can include the following types of questions:

Open question questionnaires. Open questions differ from other types of questions used in questionnaires in that they may produce unexpected results, which can make the research more original and valuable. However, the findings are more difficult to analyze when the data is obtained through open questions.

Multiple choice questions. Respondents are offered a set of answers they have to choose from. The downside of multiple choice questions is that, if there are too many answers to choose from, the questionnaire becomes confusing and boring and discourages the respondent from completing it.

Dichotomous questions. This type of question gives respondents two options to choose from, such as yes or no. It is the easiest form of questionnaire for the respondent to complete.

Scaling questions. Also referred to as ranking questions, they ask respondents to rank the available answers on a scale over a given range of values (for example, from 1 to 10).

For a standard 15,000–20,000-word business dissertation, including 25–40 questions in the questionnaire will usually suffice. Questions need to be formulated in an unambiguous and straightforward manner, and they should be presented in a logical order.

Questionnaires as primary data collection method offer the following advantages:

  • Uniformity: all respondents are asked exactly the same questions
  • Cost-effectiveness
  • The possibility to collect primary data in a shorter period of time
  • Minimal or no bias from the researcher during the data collection process
  • Usually enough time for respondents to think before answering questions, as opposed to interviews
  • The possibility to reach respondents in distant areas through online questionnaires

At the same time, the use of questionnaires as primary data collection method is associated with the following shortcomings:

  • Random answer choices made by respondents without properly reading the question
  • In closed-ended questionnaires, no possibility for respondents to express additional thoughts about the matter due to the absence of a relevant question
  • Incomplete or inaccurate information collected because respondents may not be able to understand questions correctly
  • A high rate of non-response

Survey Monkey is one of the most popular online platforms for facilitating data collection through questionnaires. Its substantial benefits include ease of use, presentation of questions in many different formats, and advanced data analysis capabilities.


Survey Monkey as a popular platform for primary data collection

There are other alternatives to Survey Monkey you might want to consider as a platform for your survey. These include, but are not limited to, Jotform, Google Forms, LimeSurvey, Crowdsignal, SurveyGizmo, Zoho Survey and many others.


How to Develop a Questionnaire for Research


A questionnaire is a technique for collecting data in which a respondent provides answers to a series of questions. [1] To develop a questionnaire that will collect the data you want takes effort and time. However, by taking a step-by-step approach to questionnaire development, you can come up with an effective means to collect data that will answer your unique research question.

Designing Your Questionnaire

Step 1 Identify the goal of your questionnaire.

  • Come up with a research question. It can be one question or several, but this should be the focal point of your questionnaire.
  • Develop one or several hypotheses that you want to test. The questions that you include on your questionnaire should be aimed at systematically testing these hypotheses.

Step 2 Choose your question type or types.

  • Dichotomous questions: these are generally “yes/no” questions, but may also be “agree/disagree” questions. They are the quickest and simplest to analyze, but are not a highly sensitive measure.
  • Open-ended questions: these questions allow the respondent to respond in their own words. They can be useful for gaining insight into the feelings of the respondent, but can be a challenge when it comes to data analysis. It is recommended to use open-ended questions to address the issue of “why.” [2]
  • Multiple choice questions: these questions consist of three or more mutually exclusive categories and ask for a single answer or several answers. [3] Multiple choice questions allow for easy analysis of results, but may not offer the respondent the answer they want to give.
  • Rank-order (or ordinal) scale questions: this type of question asks your respondents to rank items or choose items in a particular order from a set. For example, it might ask your respondents to order five things from least to most important. These types of questions force discrimination among alternatives, but do not address the issue of why the respondent made these discriminations. [4]
  • Rating scale questions: these questions allow the respondent to assess a particular issue based on a given dimension. You can provide a scale that gives an equal number of positive and negative choices, for example, ranging from “strongly agree” to “strongly disagree.” [5] These questions are very flexible, but also do not answer the question “why.”

Step 3 Develop questions for your questionnaire.

  • Write questions that are succinct and simple. You should not be writing complex statements or using technical jargon, as it will only confuse your respondents and lead to incorrect responses.
  • Ask only one question at a time. This will help avoid confusion.
  • Asking questions such as these usually requires you to anonymize or encrypt the demographic data you collect.
  • Determine if you will include an answer such as “I don’t know” or “Not applicable to me.” While these can give your respondents a way of not answering certain questions, providing these options can also lead to missing data, which can be problematic during data analysis.
  • Put the most important questions at the beginning of your questionnaire. This can help you gather important data even if you sense that your respondents may be becoming distracted by the end of the questionnaire.

Step 4 Restrict the length of your questionnaire.

  • Only include questions that are directly useful to your research question. [8] A questionnaire is not an opportunity to collect all kinds of information about your respondents.
  • Avoid asking redundant questions. This will frustrate those who are taking your questionnaire.

Step 5 Identify your target demographic.

  • Consider if you want your questionnaire to collect information from both men and women. Some studies will only survey one sex.
  • Consider including a range of ages in your target demographic. For example, you can consider young adults to be 18-29 years old, adults to be 30-54 years old, and mature adults to be 55+. Providing an age range will help you get more respondents than limiting yourself to a specific age.
  • Consider what else would make a person a target for your questionnaire. Do they need to drive a car? Do they need to have health insurance? Do they need to have a child under 3? Make sure you are very clear about this before you distribute your questionnaire.

Step 6 Ensure you can protect privacy.

  • Consider an anonymous questionnaire. You may not want to ask for names on your questionnaire. This is one step you can take to protect privacy; however, it is often possible to figure out a respondent’s identity from other demographic information (such as age, physical features, or zip code).
  • Consider de-identifying your respondents (see the sketch after this list). Give each questionnaire (and thus, each respondent) a unique number or word, and only refer to them using that new identifier. Shred any personal information that can be used to determine identity.
  • Remember that you do not need to collect much demographic information to be able to identify someone. People may be wary of providing this information, so you may get more respondents by asking fewer demographic questions (if that is possible for your questionnaire).
  • Make sure you destroy all identifying information after your study is complete.
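
One way to make the de-identification step concrete is a small script that swaps names for random codes and keeps the name-to-code link in a separate, secured file. The Python sketch below is an illustration only: the field names and ID format are invented, and your own data protection rules should govern the real procedure.

```python
import secrets

def deidentify(rows):
    """Replace names with random pseudonyms.

    `rows` is a list of dicts, each with a 'name' key plus response fields
    (an invented structure, for illustration). Returns the de-identified
    rows and a link table mapping pseudonyms back to names. Store the link
    table securely and apart from the responses, and destroy it once the
    study is complete.
    """
    link_table = {}
    clean_rows = []
    for row in rows:
        pseudonym = "R-" + secrets.token_hex(4)  # e.g. 'R-9f2a61c0'
        link_table[pseudonym] = row["name"]
        clean = {k: v for k, v in row.items() if k != "name"}
        clean["respondent_id"] = pseudonym
        clean_rows.append(clean)
    return clean_rows, link_table

responses = [{"name": "Jane Doe", "q1": "agree", "q2": 4}]
data, links = deidentify(responses)
print(data)  # names are gone; only the random respondent_id remains
```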

Writing your questionnaire

Step 1 Introduce yourself.

  • My name is Jack Smith, and I am one of the creators of this questionnaire. I am part of the Department of Psychology at the University of Michigan, where I focus on cognitive development in infants.
  • I’m Kelly Smith, a 3rd year undergraduate student at the University of New Mexico. This questionnaire is part of my final exam in statistics.
  • My name is Steve Johnson, and I’m a marketing analyst for The Best Company. I’ve been working on questionnaire development to determine attitudes surrounding drug use in Canada for several years.

Step 2 Explain the purpose of the questionnaire.

  • I am collecting data regarding the attitudes surrounding gun control. This information is being collected for my Anthropology 101 class at the University of Maryland.
  • This questionnaire will ask you 15 questions about your eating and exercise habits. We are attempting to make a correlation between healthy eating, frequency of exercise, and incidence of cancer in mature adults.
  • This questionnaire will ask you about your recent experiences with international air travel. There will be three sections of questions that will ask you to recount your recent trips and your feelings surrounding these trips, as well as your travel plans for the future. We are looking to understand how a person’s feelings surrounding air travel impact their future plans.

Step 3 Reveal what will happen with the data you collect.

  • Beware that if you are collecting information for a university or for publication, you may need to check in with your institution’s Institutional Review Board (IRB) for permission before beginning. Most research universities have a dedicated IRB staff, and their information can usually be found on the school’s website.
  • Remember that transparency is best. It is important to be honest about what will happen with the data you collect.
  • Include an informed consent form if necessary. Note that you cannot guarantee confidentiality, but you will make all reasonable attempts to ensure that you protect their information. [11]

Step 4 Estimate how long the questionnaire will take.

  • Time yourself taking the survey. Then consider that it will take some people longer than you, and some people less time.
  • Provide a time range instead of a specific time. For example, it’s better to say that a survey will take between 15 and 30 minutes than to say it will take 15 minutes and have some respondents quit halfway through. If you time several pilot participants, you can base the range on their recorded timings (see the sketch after this list).
  • Use this as a reason to keep your survey concise! You will feel much better asking people to take a 20-minute survey than you will asking them to take a 3-hour one.
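
A minimal sketch of deriving such a range from pilot timings, using only Python's standard library; the timings themselves are invented:

```python
import statistics

# Minutes each pilot participant took to finish (invented numbers).
pilot_minutes = [14, 17, 18, 21, 22, 25, 29]

low, high = min(pilot_minutes), max(pilot_minutes)
typical = statistics.median(pilot_minutes)

print(f"This survey takes about {typical:.0f} minutes ({low}-{high} minutes in testing).")
```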

Step 5 Describe any incentives that may be involved.

  • Incentives can attract the wrong kind of respondent. You don’t want to incorporate responses from people who rush through your questionnaire just to get the reward at the end. This is a danger of offering an incentive. [12]
  • Incentives can encourage people to respond to your survey who might not have responded without a reward. This is a situation in which incentives can help you reach your target number of respondents. [13]
  • Consider the strategy used by SurveyMonkey. Instead of directly paying respondents to take their surveys, they offer 50 cents to the charity of their choice when a respondent fills out a survey. They feel that this lessens the chances that a respondent will fill out a questionnaire out of pure self-interest. [14]
  • Consider entering each respondent into a drawing for a prize if they complete the questionnaire. You can offer a $25 gift card to a restaurant, a new iPod, or a ticket to a movie. This makes it less tempting to respond to your questionnaire for the incentive alone, but still offers the chance of a pleasant reward.

Step 6 Make sure your questionnaire looks professional.

  • Always proofread. Check for spelling, grammar, and punctuation errors.
  • Include a title. This is a good way for your respondents to understand the focus of the survey as quickly as possible.
  • Thank your respondents. Thank them for taking the time and effort to complete your survey.

Distributing Your Questionnaire

Step 1 Do a pilot study.

  • Was the questionnaire easy to understand? Were there any questions that confused you?
  • Was the questionnaire easy to access? (Especially important if your questionnaire is online).
  • Do you feel the questionnaire was worth your time?
  • Were you comfortable answering the questions asked?
  • Are there any improvements you would make to the questionnaire?

Step 2 Disseminate your questionnaire.

  • Use an online site, such as SurveyMonkey.com. This site allows you to write your own questionnaire with their survey builder, and provides additional options such as the option to buy a target audience and use their analytics to analyze your data. [18]
  • Consider using the mail. If you mail your survey, always make sure you include a self-addressed stamped envelope so that the respondent can easily mail their responses back. Make sure that your questionnaire will fit inside a standard business envelope.
  • Conduct face-to-face interviews. This can be a good way to ensure that you are reaching your target demographic and can reduce missing information in your questionnaires, as it is more difficult for a respondent to avoid answering a question when you ask it directly.
  • Try using the telephone. While this can be a more time-effective way to collect your data, it can be difficult to get people to respond to telephone questionnaires.

Step 3 Include a deadline.

  • Make your deadline reasonable. Giving respondents up to 2 weeks to answer should be more than sufficient. Anything longer and you risk your respondents forgetting about your questionnaire.
  • Consider providing a reminder. A week before the deadline is a good time to provide a gentle reminder about returning the questionnaire. Include a replacement copy of the questionnaire in case the original has been misplaced by your respondent.


  • https://www.questionpro.com/blog/what-is-a-questionnaire/
  • https://www.hotjar.com/blog/open-ended-questions/
  • https://www.questionpro.com/a/showArticle.do?articleID=survey-questions
  • https://surveysparrow.com/blog/ranking-questions-examples/
  • https://www.lumoa.me/blog/rating-scale/
  • http://www.sciencebuddies.org/science-fair-projects/project_ideas/Soc_survey.shtml
  • http://www.fao.org/docrep/W3241E/w3241e05.htm
  • http://managementhelp.org/businessresearch/questionaires.htm
  • https://www.surveymonkey.com/mp/survey-rewards/
  • http://www.ideafit.com/fitness-library/how-to-develop-a-questionnaire
  • https://www.surveymonkey.com/mp/take-a-tour/?ut_source=header

About This Article

Alexander Ruiz, M.Ed.

To develop a questionnaire for research, identify the main objective of your research to act as the focal point for the questionnaire. Then, choose the type of questions that you want to include, and come up with succinct, straightforward questions to gather the information that you need to answer your questions. Keep your questionnaire as short as possible, and identify a target demographic who you would like to answer the questions. Remember to make the questionnaires as anonymous as possible to protect the integrity of the person answering the questions! For tips on writing out your questions and distributing the questionnaire, keep reading!



10 Research Question Examples to Guide your Research Project

Published on October 30, 2022 by Shona McCombes. Revised on October 19, 2023.

The research question is one of the most important parts of your research paper, thesis or dissertation. It’s important to spend some time assessing and refining your question before you get started.

The exact form of your question will depend on a few things, such as the length of your project, the type of research you’re conducting, the topic, and the research problem. However, all research questions should be focused, specific, and relevant to a timely social or scholarly issue.

Once you’ve read our guide on how to write a research question, you can use these examples to craft your own.

Each explanation below contrasts a weak draft question with a stronger revision:

  • The first question is not specific enough; the second question is more specific, using clearly defined concepts.
  • Starting with “why” often means that your question is not focused enough: there are too many possible answers. By targeting just one aspect of the problem, the second question offers a clear path for research.
  • The first question is too broad and subjective: there are no clear criteria for what counts as “better.” The second question is much more specific. It uses clearly defined terms and narrows its focus to a specific population.
  • It is generally not feasible for academic research to answer broad normative questions. The second question is more specific, aiming to gain an understanding of possible solutions in order to make informed recommendations.
  • The first question is too simple: it can be answered with a simple yes or no. The second question is more complex, requiring in-depth investigation and the development of an original argument.
  • The first question is too broad and not very original. The second question identifies an underexplored aspect of the topic that requires investigation of various sources to answer.
  • The first question is not focused enough: it tries to address two different problems (the quality of sexual health services and LGBT support services). Even though the two issues are related, it’s not clear how the research will bring them together. The second integrates the two problems into one focused, specific question.
  • The first question is too simple, asking for a straightforward fact that can be easily found online. The second is a more complex question that requires analysis and detailed discussion to answer.
  • The first question is not original enough: it would be very difficult to contribute anything new. The second question, asking how productions dealt with the theme of racism through casting, staging, and allusion to contemporary events, takes a specific angle to make an original argument, and has more relevance to current social concerns and debates.
  • The first question asks for a ready-made solution, and is not researchable. The second question is a clearer comparative question, but note that it may not be practically feasible. For a smaller research project or thesis, it could be narrowed down further to focus on the effectiveness of drunk driving laws in just one or two countries.

Note that the design of your research question can depend on what method you are pursuing. Here are a few options for qualitative, quantitative, and statistical research questions.

Types of research question:

  • Qualitative research question
  • Quantitative research question
  • Statistical research question

Other interesting articles

If you want to know more about the research process, methodology, research bias, or statistics, make sure to check out some of our other articles with explanations and examples.

Methodology

  • Sampling methods
  • Simple random sampling
  • Stratified sampling
  • Cluster sampling
  • Likert scales
  • Reproducibility

Statistics

  • Null hypothesis
  • Statistical power
  • Probability distribution
  • Effect size
  • Poisson distribution

Research bias

  • Optimism bias
  • Cognitive bias
  • Implicit bias
  • Hawthorne effect
  • Anchoring bias
  • Explicit bias



v.328(7452); 2004 Jun 5

Hands-on guide to questionnaire research

Administering, analysing, and reporting your questionnaire

Petra M Boynton

1 Department of Primary Care and Population Sciences, University College London, London N19 5LW [email protected]

Short abstract

Understanding your study group is key to getting a good response to a questionnaire; dealing with the resulting mass of data is another challenge

The first step in producing good questionnaire research is getting the right questionnaire. 1 However, even the best questionnaire will not get adequate results if it is not used properly. This article outlines how to pilot your questionnaire; distribute and administer it; and get it returned, analysed, and written up for publication. It is intended to supplement published guidance on questionnaire research, three quarters of which focuses on content and design. 2

Questionnaires tend to fail because participants don't understand them, can't complete them, get bored or offended by them, or dislike how they look. Although friends and colleagues can help check spelling, grammar, and layout, they cannot reliably predict the emotional reactions or comprehension difficulties of other groups. Whether you have constructed your own questionnaire or are using an existing instrument, always pilot it on participants who are representative of your definitive sample. You need to build in protected time for this phase and get approval from an ethics committee. 3

During piloting, take detailed notes on how participants react to both the general format of your instrument and the specific questions. How long do people take to complete it? Do any questions need to be repeated or explained? How do participants indicate that they have arrived at an answer? Do they show confusion or surprise at a particular response—if so, why? Short, abrupt questions may unintentionally provoke short, abrupt answers. Piloting will provide a guide for rephrasing questions to invite a richer response (box 1).


Box 1: Patient preference is preferable

I worked on a sexual health study where we initially planned to present the questionnaire on a computer, since we had read people were supposedly more comfortable “talking” to a computer. Although this seemed to be the case in practices with middle class patients, we struggled to recruit in practices where participants were less familiar with computers. Their reasons for refusal were not linked to the topic of the research but to the laptops themselves: they saw them as something they might break, that could make them look foolish, or that would feed directly to the internet (which was inextricably linked to computers in some people’s minds). We found offering a choice between completing the questionnaire on paper or on the laptop greatly increased response rates.

Planning data collection

You should be aware of the relevant data protection legislation (for United Kingdom see www.informationcommissioner.gov.uk ) and ensure that you follow internal codes of practice for your institution—for example, obtaining and completing a form from your data protection officer. Do not include names, addresses, or other identifying markers within your electronic database, except for a participant number linked to a securely kept manual file.

The piloting phase should include planning and testing a strategy for getting your questionnaire out and back—for example, who you have invited to complete it (the sampling frame), who has agreed to do so (the response rate), who you've had usable returns from (the completion rate), and whether and when you needed to send a reminder letter. If you are employing researchers to deliver and collect the questionnaire it's important they know exactly how to do this. 4
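
A small worked example may help keep these definitions straight. The counts below are invented; the point is only how the response and completion rates are computed from the sampling frame:

```python
invited  = 600  # sampling frame: questionnaires sent out
returned = 342  # questionnaires that came back (after reminders)
usable   = 318  # returns complete enough to analyse

response_rate   = returned / invited
completion_rate = usable / returned

print(f"Response rate:   {response_rate:.1%}")   # 57.0%
print(f"Completion rate: {completion_rate:.1%}") # 93.0%
```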

Administrative errors can hamper the progress of your research. Real examples include researchers giving the questionnaire to the wrong participants (for example, a questionnaire aimed at men given to women); incomplete instructions on how to fill in the questionnaire (for example, participants did not know whether to tick one or several items); postal surveys in which the questionnaire was missing from the envelope; and a study of over 3000 participants in which the questionnaire was sent out with no return address.

Administering your questionnaire

The choice of how to administer a questionnaire is too often made on convenience or cost grounds (see table A on bmj.com ). Scientific and ethical considerations should include:

  • The needs and preferences of participants, who should understand what is required of them; remain interested and cooperative throughout completion; be asked the right questions and have their responses recorded accurately; and receive appropriate support during and after completing the questionnaire
  • The skills and resources available to your research team
  • The nature of your study—for example, short term feasibility projects, clinical trials, or large scale surveys.

Maximising your response rate

Sending out hundreds of questionnaires is a thankless task, and it is sometimes hard to pay attention to the many minor details that combine to raise response and completion rates. Extensive evidence exists on best practice (box 2), and principal investigators should ensure that they provide their staff with the necessary time and resources to follow it. Note, however, that it is better to collect fewer questionnaires with good quality responses than high numbers of questionnaires that are inaccurate or incomplete. The third article in this series discusses how to maximise response rates from groups that are hard to research. 15

Accounting for those who refuse to participate

Survey research tends to focus on people who have completed the study. Yet those who don't participate are equally important scientifically, and their details should also be recorded (remember to seek ethical approval for this). 4, 16, 17

Box 2: Factors shown to increase response rates

  • The questionnaire is clearly designed and has a simple layout 5
  • It offers participants incentives or prizes in return for completion 6
  • It has been thoroughly piloted and tested 5
  • Participants are notified about the study in advance with a personalised invitation 7
  • The aim of study and means of completing the questionnaire are clearly explained 8 , 9
  • A researcher is available to answer questions and collect the completed questionnaire 10
  • If using a postal questionnaire, a stamped addressed envelope is included 7
  • The participant feels they are a stakeholder in the study 11
  • Questions are phrased in a way that holds the participant's attention 11
  • Questionnaire has clear focus and purpose and is kept concise 7 , 8 , 11
  • The questionnaire is appealing to look at, 12 as is the researcher 13
  • If appropriate, the questionnaire is delivered electronically 14

One way of reducing refusal and non-completion rates is to set strict exclusion criteria at the start of your research. For example, for practical reasons many studies exclude participants who are unable to read or write in the language of the questionnaire and those with certain physical and mental disabilities that might interfere with their ability to give informed consent, cooperate with the researcher, or understand the questions asked. However, research that systematically excludes hard to reach groups is increasingly seen as unethical, and you may need to build additional strategies and resources into your study protocol at the outset. 15 Keep a record of all participants that fit the different exclusion categories (see bmj.com ).

Collecting data on non-participants will also allow you to monitor the research process. For example, you may find that certain researchers seem to have a higher proportion of participants refusing, and if so you should work with those individuals to improve the way they introduce the research or seek consent. In addition, if early refusals are found to be unusually high, you might need to rethink your overall approach. 10

Entering, checking, and cleaning data

Novice researchers often assume that once they have selected, designed, and distributed their questionnaire, their work is largely complete. In reality, entering, checking, and cleaning the data account for much of the workload. Some principles for keeping quantitative data clean are listed on bmj.com .

Even if a specialist team sets up the database(s), all researchers should be taught how to enter, clean, code, and back up the data, and the system for doing this should be universally agreed and understood. Agree on the statistical package you wish to use (such as SPSS, Stata, EpiInfo, Excel, or Access) and decide on a coding system before anyone starts work on the dataset.

It is good practice to enter data into an electronic database as the study progresses rather than face a mountain of processing at the end. The project manager should normally take responsibility for coordinating and overseeing this process and for ensuring that all researchers know what their role is with data management. These and other management tasks are time consuming and must be built into the study protocol and budget. Include data entry and coding in any pilot study to get an estimate of the time required and potential problems to troubleshoot.
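
As a hedged illustration of what checking and cleaning can look like in practice, here is a short pandas sketch. The column names, allowed ranges, and deliberately bad values are invented; a real study would take these rules from its agreed coding system:

```python
import pandas as pd

# Invented responses keyed by participant number.
df = pd.DataFrame({
    "participant":  [1, 2, 3, 4],
    "age":          [34, 29, 230, 41],  # 230 is a likely data entry error
    "satisfaction": [4, 5, 3, 7],       # scale runs 1-5, so 7 is invalid
})

# Range checks: flag values outside the codebook's allowed ranges.
print(df[(df["age"] < 18) | (df["age"] > 99)])
print(df[~df["satisfaction"].between(1, 5)])

# Catch duplicate participant numbers before any merging or analysis.
assert df["participant"].is_unique, "Duplicate participant IDs found"
```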

Analysing your data

You should be able to predict the type of analysis required for your different questionnaire items at the planning stage of your study by considering the structure of each item and the likely distribution of responses (box 3). 1 Table B on bmj.com shows some examples of data analysis methods for different types of responses. 18, 19, w1

Writing up and reporting

Once you have completed your data analysis, you will need to think creatively about the clearest and most parsimonious way to report and present your findings. You will almost certainly find that you have too much data to fit into a standard journal article, dissertation, or research report, so deciding what to include and omit is crucial. Take statistical advice from the outset of your research. This can keep you focused on the hypothesis or question you are testing and the important results from your study (and therefore what tables and graphs to present).

Box 3: Nasty surprise from a simple questionnaire

Moshe selected a standardised measure on emotional wellbeing to use in his research, which looked easy to complete and participants answered readily. When he came to analysing his data, he discovered that rather than scoring each response directly as indicated on the questionnaire, a complicated computer algorithm had to be created, and he was stumped. He found a statistician to help with the recoding, and realised that for future studies it might be an idea to check both the measure and its scoring system before selecting it.
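
Scoring problems like Moshe's often come down to reverse-keyed items, which cannot simply be summed as printed. The sketch below, with invented item keys on a 1-5 scale, shows the general shape of such a scoring function; the manual for any published measure should always take precedence:

```python
# Items marked True are reverse-keyed: a response of 5 counts as 1,
# 4 as 2, and so on (invented keys, for illustration).
REVERSED = {"q1": False, "q2": True, "q3": False, "q4": True}

def score(responses, scale_max=5):
    """Sum item scores, flipping reverse-keyed items first."""
    total = 0
    for item, value in responses.items():
        total += (scale_max + 1 - value) if REVERSED[item] else value
    return total

print(score({"q1": 4, "q2": 5, "q3": 3, "q4": 1}))  # 4 + 1 + 3 + 5 = 13
```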

Box 4: An unexpected result

Priti, a specialist registrar in hepatology, completed an attitude questionnaire in patients having liver transplantation and those who were still waiting for a donor. She expected to find that those who had received a new liver would be happier than those awaiting a donor. However, the morale scale used in her questionnaire showed that the transplantation group did not have significantly better morale scores. Priti felt that this negative finding was worth further investigation.

Methods section

The methods section should give details of your exclusion criteria and discuss their implications for the transferability of your findings. Data on refusals and unsuitable participants should also be presented and discussed, preferably using a recruitment diagram. w2 Finally, state and justify the statistical or qualitative analyses used. 18 , 19 w2

Results section

When compiling the results section you should return to your original research question and set out the findings that address it. In other words, make sure your results are hypothesis driven. Do not be afraid to report non-significant results, which in reality are often as important as significant results; for example, if participants did not experience anxiety in a particular situation (box 4). Don't analyse and report on every question within your questionnaire.

Choose the most statistically appropriate and visually appealing format for graphs ( table ). w3 Label graphs and their axes adequately and include meaningful titles for tables and diagrams. Refer your reader to any tables or graphs within your text, and highlight the main findings.

Examples of ways of presenting data and when to use them

Data table — Use if you need to produce something simple and quick that has a low publication cost for journals, or if you want to make data accessible to the interested reader for further manipulation. Avoid if you want to make your work look visually appealing: too many tables can weigh down the results section and obscure the really key results, forcing the reader to work too hard and perhaps give up reading your report.

Bar chart — Use if you need to convey changes and differences, particularly between groups (eg how men and women differed in their views on an exercise programme for recovering heart attack patients). Avoid if your data are linear and each item is related to the previous one; use a line graph instead, since bar charts treat data as separate groups, not continuous variables.

Scatter graph — Mostly used for displaying correlations or regressions (eg the association between number of cigarettes smoked and reduced lung capacity). Avoid if your data are based on groups or aggregated outcomes rather than individual scores.

Pie chart — Use for simple summaries of data, particularly if a small number of choices were provided. As with bar charts, avoid if you want to present linear or relational data.

Line graph — Use where the points on the graph are logically linked, usually in time (eg scores on quality of life and emotional wellbeing measures taken monthly over six months). If your data were not linked over time, repetition, etc, it is inappropriate to suggest a linear relation by presenting findings in this format.
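
To illustrate the bar chart entry above, here is a minimal matplotlib sketch comparing two groups; the group means are invented:

```python
import matplotlib.pyplot as plt

# Invented summary data: mean rating of an exercise programme by sex.
groups = ["Men", "Women"]
means = [3.4, 4.1]

fig, ax = plt.subplots()
ax.bar(groups, means)
ax.set_ylabel("Mean rating (1-5)")
ax.set_title("Views on the exercise programme, by sex")
fig.savefig("ratings_by_group.png")
```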

If you have used open ended questions within your questionnaire, do not cherry pick quotes for your results section. You need to outline what main themes emerged, and use quotes as necessary to illustrate the themes and supplement your quantitative findings.

Discussion section

The discussion should refer back to the results section and suggest what the main findings mean. You should acknowledge the limitations of your study and couch the discussion in the light of these. For example, if your response rate was low, you may need to recommend further studies to confirm your preliminary results. Your conclusions must not go beyond the scope of your study—for example, if you have done a small, parochial study do not suggest changes in national policy. You should also discuss any questions your participants persistently refused to answer or answered in a way you didn't expect.

Taking account of psychological and social influences

Questionnaire research (and indeed science in general) can never be completely objective. Researchers and participants are all human beings with psychological, emotional, and social needs. Too often, we fail to take these factors into account when planning, undertaking, and analysing our work. A questionnaire means something different to participants and researchers. w4 Researchers want data (with a view to publications, promotion, academic recognition, and further grant income). Junior research staff and administrators, especially if poorly trained and supervised, may be put under pressure, leading to critical errors in piloting (for example, piloting on friends rather than the target group), sampling (for example, drifting towards convenience rather than random samples) and in the distribution, collection, and coding of questionnaires. 15 Staff employed to assist with a questionnaire study may not be familiar with all the tasks required to make it a success and may be unaware that covering up their ignorance or skill deficits will make the entire study unsound.

Summary points

Piloting is essential to check the questionnaire works in the study group and identify administrative and analytical problems

The method of administration should be determined by scientific considerations not just costs

Entering, checking, and cleaning data should be done as the study progresses

Don't try to include all the results when reporting studies

Do include exclusion criteria and data on non-respondents

Research participants, on the other hand, may be motivated to complete a questionnaire through interest, boredom, a desire to help others (particularly true in health studies), because they feel pressurised to do so, through loneliness, or for an unconscious ulterior motive (“pleasing the doctor”). All of these introduce potential biases into the recruitment and data collection process.

Supplementary Material

This is the second in a series of three articles edited by Trisha Greenhalgh

I thank Alicia O'Cathain, Trish Greenhalgh, Jill Russell, Geoff Wong, Marcia Rigby, Sara Shaw, Fraser Macfarlane, and Will Callaghan for their helpful feedback on earlier versions of this paper and Gary Wood for advice on statistics and analysis.

PMB has taught research methods in a primary care setting for the past 13 years, specialising in practical approaches and using the experiences and concerns of researchers and participants as the basis of learning. This series of papers arose directly from questions asked about real questionnaire studies. To address these questions she and Trisha Greenhalgh explored a wide range of sources from the psychological and health services research literature.

Competing interests: None declared.


Questionnaire Design | Methods, Question Types & Examples

Published on 6 May 2022 by Pritha Bhandari. Revised on 10 October 2022.

A questionnaire is a list of questions or items used to gather data from respondents about their attitudes, experiences, or opinions. Questionnaires can be used to collect quantitative and/or qualitative information.

Questionnaires are commonly used in market research as well as in the social and health sciences. For example, a company may ask for feedback about a recent customer service experience, or psychology researchers may investigate health risk perceptions using questionnaires.

Table of contents

  • Questionnaires vs surveys
  • Questionnaire methods
  • Open-ended vs closed-ended questions
  • Question wording
  • Question order
  • Step-by-step guide to design
  • Frequently asked questions about questionnaire design

A survey is a research method where you collect and analyse data from a group of people. A questionnaire is a specific tool or instrument for collecting the data.

Designing a questionnaire means creating valid and reliable questions that address your research objectives, placing them in a useful order, and selecting an appropriate method for administration.

But designing a questionnaire is only one component of survey research. Survey research also involves defining the population you’re interested in, choosing an appropriate sampling method , administering questionnaires, data cleaning and analysis, and interpretation.

Sampling is important in survey research because you’ll often aim to generalise your results to the population. Gather data from a sample that represents the range of views in the population for externally valid results. There will always be some differences between the population and the sample, but minimising these will help you avoid sampling bias .
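
For instance, a simple random sample can be drawn from a sampling frame in a few lines. The frame below is invented; a real study would draw from an actual register of eligible respondents:

```python
import random

# Invented sampling frame: everyone eligible to receive the questionnaire.
frame = [f"patient_{i:04d}" for i in range(1, 2001)]

random.seed(42)                       # fixed seed so the draw is reproducible
sample = random.sample(frame, k=200)  # simple random sample, n = 200
print(sample[:5])
```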


Questionnaires can be self-administered or researcher-administered . Self-administered questionnaires are more common because they are easy to implement and inexpensive, but researcher-administered questionnaires allow deeper insights.

Self-administered questionnaires

Self-administered questionnaires can be delivered online or in paper-and-pen formats, in person or by post. All questions are standardised so that all respondents receive the same questions with identical wording.

Self-administered questionnaires can be:

  • Cost-effective
  • Easy to administer for small and large groups
  • Anonymous and suitable for sensitive topics

But they may also be:

  • Unsuitable for people with limited literacy or verbal skills
  • Susceptible to a nonresponse bias (most people invited may not complete the questionnaire)
  • Biased towards people who volunteer because impersonal survey requests often go ignored

Researcher-administered questionnaires

Researcher-administered questionnaires are interviews that take place by phone, in person, or online between researchers and respondents.

Researcher-administered questionnaires can:

  • Help you ensure the respondents are representative of your target audience
  • Allow clarifications of ambiguous or unclear questions and answers
  • Have high response rates because it’s harder to refuse an interview when personal attention is given to respondents

But researcher-administered questionnaires can be limiting in terms of resources. They are:

  • Costly and time-consuming to perform
  • More difficult to analyse if you have qualitative responses
  • Likely to contain experimenter bias or demand characteristics
  • Likely to encourage social desirability bias in responses because of a lack of anonymity

Your questionnaire can include open-ended or closed-ended questions, or a combination of both.

Using closed-ended questions limits your responses, while open-ended questions enable a broad range of answers. You’ll need to balance these considerations with your available time and resources.

Closed-ended questions

Closed-ended, or restricted-choice, questions offer respondents a fixed set of choices to select from. Closed-ended questions are best for collecting data on categorical or quantitative variables.

Categorical variables can be nominal or ordinal. Quantitative variables can be interval or ratio. Understanding the type of variable and level of measurement means you can perform appropriate statistical analyses for generalisable results.

Examples of closed-ended questions for different variables

Nominal variables include categories that can’t be ranked, such as race or ethnicity. This includes binary or dichotomous categories.

It’s best to include categories that cover all possible answers and are mutually exclusive. There should be no overlap between response items.

In binary or dichotomous questions, you’ll give respondents only two options to choose from.

For example, the response items for a nominal question about race might be:

  • White
  • Black or African American
  • American Indian or Alaska Native
  • Asian
  • Native Hawaiian or Other Pacific Islander

Ordinal variables include categories that can be ranked. Consider how wide or narrow a range you’ll include in your response items, and their relevance to your respondents.

Likert-type questions collect ordinal data using rating scales with five or seven points.

When you have four or more Likert-type questions, you can treat the composite data as quantitative data on an interval scale . Intelligence tests, psychological scales, and personality inventories use multiple Likert-type questions to collect interval data.

With interval or ratio data, you can apply strong statistical hypothesis tests to address your research aims.
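
A minimal sketch of how such a composite might be computed (the item names and responses are invented):

```python
# Responses to four Likert-type items (1 = strongly disagree,
# 5 = strongly agree) that measure a single attitude.
items = {"risk_1": 4, "risk_2": 5, "risk_3": 3, "risk_4": 4}

composite = sum(items.values())      # 16 on the 4-20 composite scale
mean_score = composite / len(items)  # 4.0 on the original 1-5 metric
print(composite, mean_score)
```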

Pros and cons of closed-ended questions

Well-designed closed-ended questions are easy to understand and can be answered quickly. However, you might still miss important answers that are relevant to respondents. An incomplete set of response items may force some respondents to pick the closest alternative to their true answer. These types of questions may also miss out on valuable detail.

To solve these problems, you can make questions partially closed-ended, and include an open-ended option where respondents can fill in their own answer.

Open-ended questions

Open-ended, or long-form, questions allow respondents to give answers in their own words. Because there are no restrictions on their choices, respondents can answer in ways that researchers may not have otherwise considered. For example, respondents may want to answer ‘multiracial’ for the question on race rather than selecting from a restricted list.

  • How do you feel about open science?
  • How would you describe your personality?
  • In your opinion, what is the biggest obstacle to productivity in remote work?

Open-ended questions have a few downsides.

They require more time and effort from respondents, which may deter them from completing the questionnaire.

For researchers, understanding and summarising responses to these questions can take a lot of time and resources. You’ll need to develop a systematic coding scheme to categorise answers, and you may also need to involve other researchers in data analysis for high reliability .
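
One common check on such a coding scheme is to have two researchers code the same subset of answers and compute an agreement statistic such as Cohen's kappa. A sketch using scikit-learn's cohen_kappa_score, with invented codes:

```python
from sklearn.metrics import cohen_kappa_score

# Two researchers independently assign each open-ended answer to a code.
coder_a = ["cost", "time", "cost", "trust", "time", "trust", "cost"]
coder_b = ["cost", "time", "cost", "trust", "cost", "trust", "cost"]

kappa = cohen_kappa_score(coder_a, coder_b)
print(f"Cohen's kappa: {kappa:.2f}")  # values near 1 indicate strong agreement
```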

Question wording can influence your respondents’ answers, especially if the language is unclear, ambiguous, or biased. Good questions need to be understood by all respondents in the same way ( reliable ) and measure exactly what you’re interested in ( valid ).

Use clear language

You should design questions with your target audience in mind. Consider their familiarity with your questionnaire topics and language and tailor your questions to them.

For readability and clarity, avoid jargon or overly complex language. Don’t use double negatives because they can be harder to understand.

Use balanced framing

Respondents often answer in different ways depending on the question framing. Positive frames are interpreted as more neutral than negative frames and may encourage more socially desirable answers.

Positive frame: Should protests of pandemic-related restrictions be allowed?
Negative frame: Should protests of pandemic-related restrictions be forbidden?

Use a mix of both positive and negative frames to avoid bias , and ensure that your question wording is balanced wherever possible.

Unbalanced questions focus on only one side of an argument. Respondents may be less likely to oppose the question if it is framed in a particular direction. It’s best practice to provide a counterargument within the question as well.

Unbalanced: Do you favour …? / Balanced: Do you favour or oppose …?
Unbalanced: Do you agree that …? / Balanced: Do you agree or disagree that …?

Avoid leading questions

Leading questions guide respondents towards answering in specific ways, even if that’s not how they truly feel, by explicitly or implicitly providing them with extra information.

It’s best to keep your questions short and specific to your topic of interest.

  • The average daily work commute in the US takes 54.2 minutes and costs $29 per day. Since 2020, working from home has saved many employees time and money. Do you favour flexible work-from-home policies even after it’s safe to return to offices?
  • Experts agree that a well-balanced diet provides sufficient vitamins and minerals, and multivitamins and supplements are not necessary or effective. Do you agree or disagree that multivitamins are helpful for balanced nutrition?

Keep your questions focused

Ask about only one idea at a time and avoid double-barrelled questions. Double-barrelled questions ask about more than one item at a time, which can confuse respondents.

Consider a double-barrelled item such as “Do you agree or disagree that the government should be responsible for providing clean drinking water and high-speed internet to everyone?” This question could be difficult to answer for respondents who feel strongly about the right to clean drinking water but not high-speed internet. They might only answer about the topic they feel passionate about or provide a neutral answer instead, but neither of these options captures their true answer.

Instead, you should ask two separate questions to gauge respondents’ opinions.

Do you agree or disagree that the government should be responsible for providing high-speed internet to everyone?

  • Strongly agree
  • Agree
  • Undecided
  • Disagree
  • Strongly disagree

You can organise the questions logically, with a clear progression from simple to complex. Alternatively, you can randomise the question order between respondents.

Logical flow

Using a logical flow to your question order means starting with simple questions, such as behavioural or opinion questions, and ending with more complex, sensitive, or controversial questions.

The question order that you use can significantly affect the responses by priming them in specific directions. Question order effects, or context effects, occur when earlier questions influence the responses to later questions, reducing the validity of your questionnaire.

While demographic questions are usually unaffected by order effects, questions about opinions and attitudes are more susceptible to them.

  • How knowledgeable are you about Joe Biden’s executive orders in his first 100 days?
  • Are you satisfied or dissatisfied with the way Joe Biden is managing the economy?
  • Do you approve or disapprove of the way Joe Biden is handling his job as president?

It’s important to minimise order effects because they can be a source of systematic error or bias in your study.

Randomisation

Randomisation involves presenting individual respondents with the same questionnaire but with different question orders.

When you use randomisation, order effects will be minimised in your dataset. But a randomised order may also make it harder for respondents to process your questionnaire. Some questions may need more cognitive effort, while others are easier to answer, so a random order could require more time or mental capacity for respondents to switch between questions.
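
One way to implement this, sketched below with invented question IDs, is to seed a random generator with each respondent's ID so that every respondent gets a different but reproducible order, which helps if you later need to model order effects:

```python
import random

questions = ["q_knowledge", "q_economy", "q_approval"]

def randomized_order(respondent_id, questions):
    """Return a per-respondent question order that can be reproduced later."""
    rng = random.Random(respondent_id)  # seeded on the respondent's ID
    order = questions[:]                # copy, so the master list is untouched
    rng.shuffle(order)
    return order

print(randomized_order(101, questions))
print(randomized_order(102, questions))
```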

Follow this step-by-step guide to design your questionnaire.

Step 1: Define your goals and objectives

The first step of designing a questionnaire is determining your aims.

  • What topics or experiences are you studying?
  • What specifically do you want to find out?
  • Is a self-report questionnaire an appropriate tool for investigating this topic?

Once you’ve specified your research aims, you can operationalise your variables of interest into questionnaire items. Operationalising concepts means turning them from abstract ideas into concrete measurements. Every question needs to address a defined need and have a clear purpose.

Step 2: Use questions that are suitable for your sample

Create appropriate questions by taking the perspective of your respondents. Consider their language proficiency and available time and energy when designing your questionnaire.

  • Are the respondents familiar with the language and terms used in your questions?
  • Would any of the questions insult, confuse, or embarrass them?
  • Do the response items for any closed-ended questions capture all possible answers?
  • Are the response items mutually exclusive?
  • Do the respondents have time to respond to open-ended questions?

Consider all possible options for responses to closed-ended questions. From a respondent’s perspective, a lack of response options reflecting their point of view or true answer may make them feel alienated or excluded. In turn, they’ll become disengaged or inattentive to the rest of the questionnaire.

Step 3: Decide on your questionnaire length and question order

Once you have your questions, make sure that the length and order of your questions are appropriate for your sample.

If respondents are not being incentivised or compensated, keep your questionnaire short and easy to answer. Otherwise, your sample may be biased with only highly motivated respondents completing the questionnaire.

Decide on your question order based on your aims and resources. Use a logical flow if your respondents have limited time or if you cannot randomise questions. Randomising questions helps you avoid bias, but it can take more complex statistical analysis to interpret your data.

Step 4: Pretest your questionnaire

When you have a complete list of questions, you’ll need to pretest it to make sure what you’re asking is always clear and unambiguous. Pretesting helps you catch any errors or points of confusion before performing your study.

Ask friends, classmates, or members of your target audience to complete your questionnaire using the same method you’ll use for your research. Find out if any questions were particularly difficult to answer or if the directions were unclear or inconsistent, and make changes as necessary.

If you have the resources, running a pilot study will help you test the validity and reliability of your questionnaire. A pilot study is a practice run of the full study, and it includes sampling, data collection , and analysis.

You can find out whether your procedures are unfeasible or susceptible to bias and make changes in time, but you can’t test a hypothesis with this type of study because it’s usually statistically underpowered .
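
The "statistically underpowered" point can be made concrete with a quick power calculation. The sketch below uses statsmodels and assumes a two-group comparison with a medium standardized effect; the numbers are illustrative only:

```python
from statsmodels.stats.power import TTestIndPower

# Participants needed per group to detect a medium effect (d = 0.5)
# with 80% power at alpha = 0.05 in a two-sample t-test.
n_per_group = TTestIndPower().solve_power(effect_size=0.5, alpha=0.05, power=0.8)
print(round(n_per_group))  # roughly 64 per group - far more than a typical pilot
```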

Frequently asked questions about questionnaire design

A questionnaire is a data collection tool or instrument, while a survey is an overarching research method that involves collecting and analysing data from people using questionnaires.

Closed-ended, or restricted-choice, questions offer respondents a fixed set of choices to select from. These questions are easier to answer quickly.

Open-ended or long-form questions allow respondents to answer in their own words. Because there are no restrictions on their choices, respondents can answer in ways that researchers may not have otherwise considered.

A Likert scale is a rating scale that quantitatively assesses opinions, attitudes, or behaviours. It is made up of four or more questions that measure a single attitude or trait when response scores are combined.

To use a Likert scale in a survey , you present participants with Likert-type questions or statements, and a continuum of items, usually with five or seven possible responses, to capture their degree of agreement.

You can organise the questions logically, with a clear progression from simple to complex, or randomly between respondents. A logical flow helps respondents process the questionnaire easier and quicker, but it may lead to bias. Randomisation can minimise the bias from order effects.

Questionnaires can be self-administered or researcher-administered.

Researcher-administered questionnaires are interviews that take place by phone, in person, or online between researchers and respondents. You can gain deeper insights by clarifying questions for respondents or asking follow-up questions.


Developing Surveys on Questionable Research Practices: Four Challenging Design Problems

Open access | Published: 02 September 2024

  • Christian Berggren, ORCID: orcid.org/0000-0002-4233-5138 (1)
  • Bengt Gerdin, ORCID: orcid.org/0000-0001-8360-5387 (2)
  • Solmaz Filiz Karabag, ORCID: orcid.org/0000-0002-3863-1073 (1, 3)


The exposure of scientific scandals and the increase of dubious research practices have generated a stream of studies on Questionable Research Practices (QRPs), such as failure to acknowledge co-authors, selective presentation of findings, or removal of data not supporting desired outcomes. In contrast to high-profile fraud cases, QRPs can be investigated using quantitative, survey-based methods. However, several design issues remain to be solved. This paper starts with a review of four problems in the QRP research: the problem of precision and prevalence, the problem of social desirability bias, the problem of incomplete coverage, and the problem of controversiality, sensitivity and missing responses. Various ways to handle these problems are discussed based on a case study of the design of a large, cross-field QRP survey in the social and medical sciences in Sweden. The paper describes the key steps in the design process, including technical and cognitive testing and repeated test versions to arrive at reliable survey items on the prevalence of QRPs and hypothesized associated factors in the organizational and normative environments. Partial solutions to the four problems are assessed, unresolved issues are discussed, and tradeoffs that resist simple solutions are articulated. The paper ends with a call for systematic comparisons of survey designs and item quality to build a much-needed cumulative knowledge trajectory in the field of integrity studies.


Introduction

The public revelations of research fraud and non-replicable findings (Berggren & Karabag, 2019; Levelt et al., 2012; Nosek et al., 2022) have created a lively interest in studying research integrity. Most studies in this field tend to focus on questionable research practices, QRPs, rather than blatant fraud, which is less common and hard to study with rigorous methods (Butler et al., 2017). Despite the significant contributions of this research about the incidence of QRPs in various countries and contexts, several issues still need to be addressed regarding the challenges of designing precise and valid survey instruments and achieving satisfactory response rates in this sensitive area. While studies in management (Hinkin, 1998; Lietz, 2010), behavioral sciences, psychology (Breakwell et al., 2020), sociology (Brenner, 2020), and education (Hill et al., 2022) have provided guidelines for designing surveys, they rarely discuss how to develop, test, and use surveys targeting sensitive and controversial issues such as organizational or individual corruption (Lin & Yu, 2020), fraud (Lawlor et al., 2021), and misconduct. The aim of this study is to contribute to a systematic discussion of the challenges facing survey designers in these areas and, by way of a detailed case study, to highlight alternative ways to increase the participation in and reliability of surveys focusing on questionable research practices, scientific norms, and organizational climate.

The following section starts with a literature-based review of four important problems:

the lack of conceptual consensus and precise measurements,

the problem of social desirability bias,

the difficulty of covering both quantitative and qualitative research fields,

the problem of controversiality and sensitivity.

Section 3 presents an in-depth case study of the development and implementation of a survey on QRPs in the social and medical sciences in Sweden in 2018–2021, designed to target these problems. Its first results were presented in this journal (Karabag et al., 2024). The section also describes the development process and the survey content and highlights the general design challenges. Section 4 returns to the four problems by discussing partial solutions, difficult tradeoffs, and remaining issues.

Four Design Problems in the Study of Questionable Research Practices

Extant QRP studies have generated an impressive body of knowledge regarding the occurrence and complexities of questionable practices, their increasing trend in several academic fields, and the difficulty of mitigating them with conventional interventions such as ethics courses and espousal of integrity policies (Gopalakrishna et al., 2022 ; Karabag et al., 2024 ; Necker, 2014 ). However, investigations on the prevalence of QRPs have so far lacked systematic problem analysis. Below, four main problems are discussed.

The Problem of Conceptual Clarity and Measurement Precision

Studies of QRP prevalence report high levels of questionable behaviors but also considerable variation in their estimates. This is illustrated in the examples below:

“42% had collected more data after inspecting whether results were statistically significant… and 51% had reported an unexpected finding as though it had been hypothesized from the start (HARKing)” (Fraser et al., 2018, p. 1); “51.3% of respondents engaging frequently in at least one QRP” (Gopalakrishna et al., 2022, p. 1); “…one third of the researchers stated that for the express purpose of supporting hypotheses with statistical significance they engaged in post hoc exclusion of data” (Banks et al., 2016, p. 10).

On a general level, QRPs constitute deviations from the responsible conduct of research that are not severe enough to be defined as fraud and fabrication (Steneck, 2006). Within these borders, there is no conceptual consensus regarding specific forms of QRPs (Bruton et al., 2020; Xie et al., 2021). This has resulted in considerable variation in prevalence estimates (Agnoli et al., 2017; Artino Jr et al., 2019; Fiedler & Schwarz, 2016). Many studies emphasize the role of intentionality, implying a purpose to support a specific assertion with biased evidence (Banks et al., 2016). This tends to be backed by reports of malpractices in quantitative research, such as p-hacking or HARKing, where unexpected findings or results from an exploratory analysis are reported as having been predicted from the start (Andrade, 2021). Other QRP studies, however, build on another, often implicit conceptual definition and include practices that could instead be defined as sloppy or under-resourced research, e.g., insufficient attention to equipment, deficient supervision of junior co-workers, inadequate note-keeping of the research process, or use of inappropriate research designs (Gopalakrishna et al., 2022). Alternatively, such studies include behaviors like “Fashion-determined choice of research topic”, “Instrumental and marketable approach”, and “Overselling methods, data or results” (Ravn & Sørensen, 2021, p. 30; Vermeulen & Hartmann, 2015), which may be opportunistic or survivalist but do not necessarily involve intentions to mislead.

To shed light on the prevalence of QRPs in different environments, the first step is to conceptualize and delimit the practices to be considered. The next step is to operationalize the conceptual approach into useful indicators and, if needed, to reformulate and reword the indicators into unambiguous, easily understood items (Hinkin, 1995, 1998). The importance of careful item design has been demonstrated by Fiedler and Schwarz (2016), who show how the perceived QRP prevalence changes when specifications are added to well-known QRP items. Such specifications include: “failing to report all dependent measures that are relevant for a finding”, “selectively reporting studies related to a specific finding that ‘worked’” (Fiedler & Schwarz, 2016, p. 46, italics in original), or “collecting more data after seeing whether results were significant in order to render non-significant results significant” (Fiedler & Schwarz, 2016, p. 49, italics in original). These specifications demonstrate the importance of precision in item design, the need for item tests before application in a large-scale survey, and, as the case study in Sect. 3 indicates, the value of statistically analyzing the selected items post-implementation.

The Problem of Social Desirability

Case studies of publicly exposed scientific misconduct have the advantage of explicitness and possible triangulation of sources (Berggren & Karabag, 2019 ; Huistra & Paul, 2022 ). Opinions may be contradictory, but researchers/investigators may often approach a variety of stakeholders and compare oral statements with documents and other sources (Berggren & Karabag, 2019 ). By contrast, quantitative studies of QRPs need to rely on non-public sources in the form of statements and appraisals of survey respondents for the dependent variables and for potentially associated factors such as publication pressure, job insecurity, or competitive climate.

Many QRP surveys use items that target the respondents’ personal attitudes and preferences regarding the dependent variables, indicating QRP prevalence, as well as the explanatory variables. This has the advantage that the respondents presumably know their own preferences and practices. A significant disadvantage, however, concerns social desirability, which in this context means the tendency of respondents to portray themselves, sometimes inadvertently, in more positive ways than justified by their behavior. The extent of this problem was indicated in a meta-study by Fanelli ( 2009 ), which demonstrated major differences between answers to sensitive survey questions that targeted the respondents’ own behavior and questions that focused on the behavior of their colleagues. In the case study below, the pros and cons of the latter indirect approaches are analyzed.

The Problem of Covering Both Quantitative and Qualitative Research

Studies of QRP prevalence are dominated by quantitative research approaches, where there exists a common understanding of the meaning of facts, proper procedures, and scientific evidence. Several research fields, including in the social and medical sciences, also encompass qualitative approaches — case studies, interpretive inquiries, or discourse analysis — where assessments of ‘truth’ and ‘evidence’ may be different or more complex to evaluate.

This does not mean that all qualitative endeavors are equal or that deceit — such as presenting fabricated interview quotes or referring to non-existent protocols — is accepted. However, while there are defined criteria for reporting qualitative research, such as the Consolidated Criteria for Reporting Qualitative Research (COREQ) (Tong et al., 2007) or the Standards for Reporting Qualitative Research (SRQR checklist) (O’Brien et al., 2014), the field of qualitative research encompasses a wide range of approaches. These include comparative case studies that offer detailed evidence to support their claims — such as studies of the differences between British and Japanese factories (Dore, 1973/2011) — as well as discourse analyses and interpretive studies, where the concept of ‘evidence’ is more fluid and harder to apply, and where the generative richness of the analysis is a key component of quality (Flick, 2013). This intra-field variation makes it hard to pin down and agree upon general QRP items to capture such behaviors in qualitative research. Some researchers have tried to interpret and report qualitative research by means of quantified methods (Ravn & Sørensen, 2021), but so far these attempts constitute a marginal phenomenon. Consequently, the challenges of measuring the prevalence of QRPs (or similar issues) in the variegated field of qualitative research remain largely unexplored.

The Problem of Institutional Controversiality and Personal Sensitivity

Science and academia depend on public trust for funding and executing research. This makes investigations of questionable behaviors a controversial issue for universities and may lead to institutional refusal or non-response. Such resistance was experienced by the designers of a large-scale survey of norms and practices in Dutch academia when several universities decided not to take part, referring to the potential danger of negative publicity (de Vrieze, 2021). A Flemish survey on academic careers encountered similar participation problems (Aubert Bonn & Pinxten, 2019). Another study, on universities’ willingness to solicit whistleblowers for participation, revealed that university officers, managers, and lawyers tend to feel obligated to protect their institution’s reputation (Byrn et al., 2016). Such institutional actors may resist participation to avoid the exposure of potentially negative information about their institutions and management practices, which might damage the university’s brand (Byrn et al., 2016; Downes, 2017).

QRP surveys also involve questions that are sensitive and potentially intrusive from the individual respondent’s perspective, which can lead to reluctance to participate and non-response behavior (Roberts & John, 2014; Tourangeau & Yan, 2007). Studies show that willingness to participate declines for surveys covering sensitive issues such as misconduct, crime, and corruption, compared to less sensitive ones like leisure activities (cf. Tourangeau et al., 2010). The method of survey administration — whether face-to-face, over the phone, via the web, or on paper — can influence the perceived sensitivity and the response rate (Siewert & Udani, 2016; Szolnoki & Hoffmann, 2013). In the case study below, the survey did not require any institutional support. Instead, the designers focused on minimizing the individual sensitivity problem by avoiding questions about the respondents’ personal practices and concentrating on their colleagues’ behaviors (see Sect. 4.2). Even if a respondent agrees to participate, they may not answer the QRP items, due either to insufficient knowledge about their colleagues’ practices or to a lack of motivation to answer critical questions about those practices (Beatty & Herrmann, 2002; Yan & Curtin, 2010). Additionally, a significant time gap between observing specific QRPs in the respondent’s research environment and receiving the survey may make it difficult to recall and accurately respond to the questions. Such issues may also result in non-response problems.

Addressing the Problems: Case Study of a Cross-Field QRP Survey – Design Process, Survey Content, Design Challenges

This section presents a case study of the way these four problems were addressed in a cross-field survey intended to capture QRP prevalence and associated factors across the social and medical sciences in Sweden. The account is based on the authors’ intensive involvement in the design and analysis of the survey, including the technical and cognitive testing and the post-implementation analysis of item quality, missing responses, and open respondent comments. The theoretical background and the substantive results of the study are presented in a separate paper (Karabag et al., 2024). Method and language experts at Statistics Sweden, the government agency responsible for official statistics in Sweden, supported the testing procedures and the stratified respondent sampling, and administered the survey roll-out.

The Survey Design Process – Repeated Testing and Prototyping

The design process included four steps of testing, revising, and prototyping, which allowed the researchers to iteratively improve the survey and plan the roll-out.

Step 1: Development of the Baseline Survey

This step involved searching the literature and creating a list of alternative constructs concerning the key concepts in the planned survey. Based on the study’s aim, the first and third authors compared these constructs and examined how they had been itemized in the literature. After two rounds of discussions, they agreed on construct formulations and relevant ways to measure them, rephrased items if deemed necessary, and designed new items in areas where the extant literature did not provide any guidance. In this way, Survey Version 1 was compiled.

Step 2: Pre-Testing by Means of a Large Convenience Sample

In the second step, this survey version was reviewed by two experts in organizational behavior at Linköping University. This review led to minor adjustments and the creation of Survey Version 2 , which was used for a major pretest. The aim was both to check the quality of individual items and to garner enough responses for a factor analysis that could be used to build a preliminary theoretical model. This dual aim required a larger sample than suggested in the literature on pretesting (Perneger et al., 2015 ). At the same time, it was essential to minimize the contamination of the planned target population in Sweden. To accomplish this, the authors used their access to a community of organization scholars to administer Survey Version 2 to 200 European management researchers.

This mass pre-testing yielded 163 responses. The data were used to form preliminary factor structures and test a structural equation model. Feedback from a few of the respondents highlighted conceptual issues and duplicated questions. Survey Version 3 was developed and prepared for detailed pretesting based on this feedback.

Step 3: Focused Pre-Testing and Technical Assessment

This step focused on pre-testing and technical assessment. The participants in this step’s pretesting were ten researchers (six in the social sciences and four in the medical sciences) at five Swedish universities: Linköping, Uppsala, Gothenburg, Gävle, and the Stockholm School of Economics. Five of these researchers mainly used qualitative research methods, two used both qualitative and quantitative methods, and three used quantitative methods. In addition, Statistics Sweden conducted a technical assessment of the survey items, focusing on wording, sequence, and response options (Wallenborg Likidis, 2019). Based on the feedback from the ten pretest participants and the Statistics Sweden assessment, Survey Version 4 was developed, translated into Swedish, and reviewed by two researchers with expertise in research ethics and scientific misconduct.

It should be highlighted that Swedish academia is predominantly bilingual. While most researchers have Swedish as their mother tongue, many are more proficient in English, and a minority have limited or no knowledge of Swedish. During the design process, the two language versions were compared item by item and slightly adjusted by skilled bilingual researchers. This task was relatively straightforward since most items and concepts were derived from previously published literature in English. Notably, the Swedish versions of key terms and concepts have long been used within Swedish academia (see, for example, Berggren, 2016; Hasselberg, 2012). To secure translation quality, the language was checked by a language expert at Statistics Sweden.

Step 4: Cognitive Interviews by Survey and Measurement Experts

Next, cognitive interviews (Willis, 2004 ) were organized with eight researchers from the social and medical sciences and conducted by an expert from Statistics Sweden (Wallenborg Likidis, 2019 ). The participants included four women and four men, ranging in age from 30 to 60. They were two doctoral students, two lecturers, and four professors, representing five different universities and colleges. Additionally, two participants had a non-Nordic background. To ensure confidentiality, no connections are provided between these characteristics and the individual participants.

An effort was made to achieve a distribution of gender, age, subject, employment, and institution. Four social science researchers primarily used qualitative research methods, while the remaining four employed qualitative and quantitative methods. Additionally, four respondents completed the Swedish version of the survey, and four completed the English version.

The respondents completed the survey in the presence of a methods expert from Statistics Sweden, who observed their entire response process. The expert noted spontaneous reactions and recorded instances where respondents hesitated or struggled to understand an item. After the survey, the expert conducted a structured interview with all eight participants, addressing details in each section of the survey, including the missive for recruiting respondents. Some respondents provided oral feedback while reading the cover letter and answering the questions, while others offered feedback during the subsequent interview.

During the cognitive interview process, the methods expert continuously communicated suggestions for improvements to the design team. A detailed test protocol confirmed that most items were sufficiently strong, although a few required minor modifications. The research team then finalized Survey Version 5 , which included both English and Swedish versions (for the complete survey, see Supplementary Material S1).

Although the test successfully captured a diverse range of participants, it would have been desirable to conduct additional tests of the English survey with more non-Nordic participants; as it stands, only one such test was conducted. Despite the participants’ different approaches to completing the survey, the estimated time to complete it was approximately 15–20 min. No significant time difference was observed between completing the survey in Swedish and English.

Design Challenges – the Dearth of an Item-Specific Public Quality Discussion

The design decision to employ survey items from the relevant literature as much as possible was motivated by a desire to increase comparability with previous studies of questionable research practices. However, this approach came with several challenges. Survey-based studies of QRPs rely on the respondents’ subjective assessments, with no possibility to compare the answers with other sources, so an open discussion of survey problems would be highly valuable. Yet although published studies usually present the items used in the surveys, there is seldom any analysis of the problems and tradeoffs involved in using a particular type of item or response format, and meager information about item validity. Few studies, for example, contain any analysis that clarifies which items measured the targeted variables with sufficient precision and which items failed to do so.

Another challenge when using existing survey studies is the lack of information regarding the respondents’ free-text comments about the survey’s content and quality. This could be because the survey did not contain any open questions or because the authors of the report could not systematically analyze the answers. As seen below, however, open respondent comments on a questionnaire involving sensitive or controversial aspects may reveal important problems that did not surface during the pretest process, which by necessity targets much smaller samples.

Survey Content

The survey started with questions about the respondent’s current employment and research environment. It ended with background questions on the respondents’ positions and the extent of their research activity, plus space for open comments about the survey. The core content of the survey consisted of sections on the organizational climate (15 items), scientific norms (13 items), good and questionable research practices (16 items), perceptions of fairness in the academic system (4 items), motivation for conducting research (8 items), ethics training and policies (5 items), and questions on the quality of the research environment and the respondent’s perceived job security.

Sample and Response Rate

All researchers, teachers, and Ph.D. students employed at Swedish universities are registered by Statistics Sweden. To ensure balanced representation and perspectives from both large universities and smaller university colleges, the institutions were divided into three strata based on the number of researchers, teachers, and Ph.D. students: more than 1,000 individuals (7 universities and university colleges), 500–999 individuals (3 institutions), and fewer than 500 individuals (29 institutions). From these strata, Statistics Sweden randomly sampled 35%, 45%, and 50% of the relevant employees, respectively, resulting in a sample of 10,047 individuals. After coverage analysis and exclusion of wrongly included individuals, 9,626 individuals remained.
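To make the stratified sampling step concrete, the following sketch draws stratum-specific simple random samples using the reported sampling fractions. The stratum population sizes are invented for illustration; only the strata definitions and the fractions (35%, 45%, 50%) come from the study.

```python
import random

# Hypothetical sampling frame: stratum -> list of person identifiers.
# Population sizes are assumptions; the strata cut-offs and sampling
# fractions follow the description in the text.
frame = {
    "large (>1,000 staff)": list(range(0, 20000)),
    "medium (500-999 staff)": list(range(20000, 23000)),
    "small (<500 staff)": list(range(23000, 27000)),
}
fractions = {
    "large (>1,000 staff)": 0.35,
    "medium (500-999 staff)": 0.45,
    "small (<500 staff)": 0.50,
}

random.seed(2018)  # fixed seed so the demo is reproducible
sample = []
for stratum, persons in frame.items():
    n = round(len(persons) * fractions[stratum])  # stratum-specific sample size
    sample.extend(random.sample(persons, n))      # simple random sample within the stratum

print(f"Total sampled: {len(sample)}")
```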

The selected individuals received a personal postal letter with a missive in both English and Swedish, informing them about the project and the survey and notifying them that they could respond on paper or online. The online version provided the option to answer in either English or Swedish; the paper version was available only in English to reduce the cost of production and posting. The missive provided the recipients with comprehensive information about the study and what their involvement would entail. It emphasized the voluntary character of participation and their right to withdraw from the survey at any time, adding: “If you do not want to answer the questions, we kindly ask you to contact us. Then you will not receive any reminders.” Sixty-three individuals used this decline option. In line with standard Statistics Sweden procedures, survey completion implied agreement to participate and to the publication of anonymized results, and indicated the participants’ understanding of the terms provided (Duncan & Cheng, 2021). An email address was provided for respondents to request study outputs or for any other reason. The survey was open for data collection for two months, during which two reminders were sent to non-responders who had not opted out.

Once Statistics Sweden had collected the answers, they were anonymized and used to generate data files delivered to the authors. Statistics Sweden also provided anonymized information about the age, gender, and type of employment of each respondent in the dataset delivered to the researchers. Of the targeted individuals, 3,295 responded, amounting to an overall response rate of 34.2%. An analysis of missing value patterns revealed that 290 of the respondents either lacked data for an entire factor or had too many missing values dispersed over several survey sections. After removing these 290 responses, we used SPSS algorithms (IBM-SPSS Statistics 27) to analyze the remaining missing values, which were randomly distributed and constituted less than 5% of the data. These values were replaced using the program’s imputation routine (Madley-Dowd et al., 2019). The final dataset consisted of 3,005 individuals, fairly evenly distributed between female and male respondents (53.5% vs. 46.5%) and medical and social scientists (51.3% vs. 48.5%). An overview of the sample and the response rate is provided in Table 1, which can also be found in Karabag et al. (2024). As shown in Table 1, the proportions of male and female respondents and of respondents from the medical and social sciences, as well as the age distribution of the respondents, compared well with the original selection frame from Statistics Sweden.
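The two-step screening described above (dropping heavily incomplete respondents, then imputing the sparse remainder) can be sketched as follows. The authors used SPSS; this pandas/scikit-learn version is only an analogous illustration, and the demo data, the 30% cutoff, and the median-imputation choice are assumptions.

```python
import numpy as np
import pandas as pd
from sklearn.impute import SimpleImputer

# Demo data: 100 respondents x 20 Likert items, with ~5% of values knocked out.
rng = np.random.default_rng(0)
responses = pd.DataFrame(
    rng.integers(1, 6, size=(100, 20)).astype(float),
    columns=[f"item_{i}" for i in range(1, 21)],
)
responses = responses.mask(rng.random(responses.shape) < 0.05)

# Step 1: remove respondents whose share of missing answers is too high
# (the cutoff is an assumption; the paper does not report its exact rule).
share_missing = responses.isna().mean(axis=1)
kept = responses.loc[share_missing <= 0.30]

# Step 2: impute the remaining, sparsely scattered missing values.
# SimpleImputer stands in for the SPSS imputation routine used in the study.
imputer = SimpleImputer(strategy="median")
completed = pd.DataFrame(imputer.fit_transform(kept),
                         columns=kept.columns, index=kept.index)

print(f"Respondents kept: {len(kept)} of {len(responses)}")
```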

Revisiting the Four Problems: Partial Solutions and Remaining Issues

Managing the Precision Problem - the Value of Factor Analyses

As noted above, the lack of conceptual consensus and standard ways to measure QRPs has resulted in a huge variation in estimated prevalence. In the case studied here, the purpose was to investigate deviations from research integrity and not low-quality research in general. This conceptual focus implied that selected survey items regarding QRP should build on the core aspect of intention, as suggested by Banks et al. ( 2016 , p. 323): “design, analytic, or reporting practices that have been questioned because of the potential for the practice to be employed with the purpose of presenting biased evidence in favor of an assertion”. After scrutinizing the literature, five items were selected as general indicators of QRP, irrespective of the research approach (see Table  2 ).

An analysis of the survey responses indicated that the general QRP indicators worked well in terms of understandability and precision. Considering the sensitive nature of the items, which typically yields very high rates of missing data (Fanelli, 2009; Tourangeau & Yan, 2007), our missing rates of 11–21% must be considered modest. In addition, there were only a few critical comments on the item formulations in the open response section at the end of the survey (see below).

Regarding the explanatory (independent) variables, the survey was inspired by studies showing the importance of the organizational climate and the normative environment within academia (Anderson et al., 2010). Organizational climate can be measured in several ways; the studied survey focused on items related to a collegial versus a competitive climate. The analysis of the normative environment was inspired by the classical norms of science articulated by Robert Merton in his CUDOS framework: communism (communalism), universalism, disinterestedness, and organized skepticism (Merton, 1942/1973). This framework has been extensively discussed and challenged but remains a key reference (Anderson et al., 2010; Chalmers & Glasziou, 2009; Kim & Kim, 2018; Macfarlane & Cheng, 2008). Moreover, we were inspired by Merton’s later work on the ambivalence and ambiguities of scientists (Merton, 1942/1973), and by the counter-norms suggested by Mitroff (1974). Thus, the survey involved a composite set of items to capture the contradictory normative environment in academia: classical norms as well as their counter-norms.

To reduce the problems of social desirability bias and personal sensitivity, the survey design avoided items about the respondent’s personal adherence to explicit ideals, which are common in many surveys (Gopalakrishna et al., 2022). Instead, the studied survey focused on the normative preferences and attitudes within the respondent’s environment. This necessitated the identification, selection, and refinement of 3–4 items for each potentially relevant norm/counter-norm. The selection process drew on previous studies of norm subscription in various research communities (Anderson et al., 2007; Braxton, 1993; Bray & von Storch, 2017). For the norm of skepticism, we consulted studies in the accounting literature on the three key elements of professional skepticism: a questioning mind, suspension of judgment, and the search for knowledge (Hurtt, 2010).

The first analytical step after receiving the completed survey set from Statistics Sweden was to conduct a set of factor analyses to assess the quality and validity of the survey items related to the normative environment and the organizational climate. These analyses suggested three clearly identifiable factors related to the normative environment: (1) a counter-norm factor combining Mitroff’s particularism and dogmatism (‘Biasedness’ in the further analysis), and two Mertonian factors: (2) Skepticism and (3) Openness, a variant of Merton’s Communalism (see Table 3). A fourth Mertonian factor, Disinterestedness, could not be identified in our analysis.

The analytical process for organizational climate involved reducing the number of items from 15 to 11. Here, the factor analysis suggested two clearly identifiable factors, one related to collegiality and the other to competition (see Table 4). Overall, the factor analyses suggested that the design efforts had paid off in terms of high item quality, robust factor loadings, and a very limited need to remove items.
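As an illustration of this analytical step, the sketch below runs an exploratory factor analysis on synthetic Likert-style data using the factor_analyzer package. The item names, the varimax rotation, and the data are assumptions; the study’s actual items and extraction settings are documented in Tables 3 and 4.

```python
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer  # pip install factor_analyzer

# Synthetic stand-in for the norm items: three latent factors drive 12 items.
rng = np.random.default_rng(1)
latent = rng.normal(size=(500, 3))
true_loadings = rng.uniform(0.4, 0.9, size=(3, 12))
norm_items = pd.DataFrame(
    latent @ true_loadings + rng.normal(scale=0.8, size=(500, 12)),
    columns=[f"norm_q{i}" for i in range(1, 13)],
)

fa = FactorAnalyzer(n_factors=3, rotation="varimax")  # rotation choice assumed
fa.fit(norm_items)

# Inspect loadings to judge item quality and the interpretability of factors.
print(pd.DataFrame(fa.loadings_, index=norm_items.columns).round(2))
print(fa.get_factor_variance())  # variance, proportion, cumulative per factor
```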

In a parallel step, the open comments were assessed as an indication of how the study was perceived by the respondents (see Table 5). Of the 3,005 respondents, 622 provided comprehensible comments, many of them extensive. Of these, 187 comments related to the respondents’ own employment or role, 120 to their working conditions and research environment, and 98 to the academic environment and atmosphere. Problems in knowing the details of colleagues’ practices were mentioned in 82 comments.

Reducing Desirability Bias - the Challenge of Nonresponse

It is well established that studies on topics where the respondent has anything embarrassing or sensitive to report suffer from more missing responses than studies on neutral subjects, and that respondents may edit the information they provide on sensitive topics (Tourangeau & Yan, 2007). Such social desirability bias applies to QRP studies that explicitly target the respondents’ personal attitudes and behaviors. To reduce this problem, the studied survey applied a non-self format focusing on the behaviors and preferences of the respondents’ colleagues. Relevant survey items from published studies were rephrased from self-format designs to non-self questions about practices in the respondent’s environment, using the format “In my research environment, colleagues…” followed by a five-step incremental response scale from “(1) never” to “(5) always”. In a similar way, the survey avoided “should” statements about ideal normative values (“Scientists and scholars should critically examine…”). Instead, the survey used items intended to indicate the revealed preferences in the respondent’s normative environment regarding universalism versus particularism or openness versus secrecy.
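A toy encoding of this non-self item format is shown below. The stem and the scale endpoints follow the description above; the intermediate scale labels and the example behavior are invented for illustration.

```python
# Non-self item stem and five-step response scale.
STEM = "In my research environment, colleagues {behavior}"
SCALE = {1: "never", 2: "rarely", 3: "sometimes", 4: "often", 5: "always"}
# (1) "never" and (5) "always" are as reported; labels 2-4 are assumptions.

example_item = STEM.format(
    behavior="present unexpected findings as if they had been hypothesized from the start"
)
print(example_item)
print(" / ".join(f"({k}) {v}" for k, v in SCALE.items()))
```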

As indicated by Fanelli (2009), these redesign efforts probably reduced the social desirability bias significantly. At the same time, however, the redesign seemed to aggravate a problem not discussed by Fanelli (2009): an uncertainty problem related to the respondents’ difficulty in knowing the practices of their colleagues in questionable areas. This issue was indicated by the open comments at the end of the studied survey, where 13% of the 622 commenting respondents pointed out that they lacked sufficient knowledge about the behavior of their colleagues to answer the QRP questions (see Table 5). One respondent wrote:

“It’s difficult to answer questions about ‘colleagues in my research area’ because I don’t have an insight into their research practices; I can only make informed guesses and generalizations. Therefore, I am forced to answer ‘don’t know’ to a lot of questions”.

Regarding the questions on general QRPs, the rate of missing responses varied between 11% and 21%. For the questions targeting specific QRP practices in quantitative and qualitative research, the rate of missing responses ranged from 38% to 49%. Unfortunately, the non-response alternative for these questions (“Don’t know/not relevant”) conflated two issues: lack of knowledge and lack of relevance. Thus, we do not know which part of the missing responses reflected the absence of the specific research approach in the respondent’s environment and which part signaled a lack of knowledge about collegial practices in that environment.
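Item-nonresponse rates like those quoted above can be computed per item once “Don’t know/not relevant” answers are recoded as missing, as in this sketch; the column names and demo data are hypothetical.

```python
import numpy as np
import pandas as pd

# Toy data: 200 respondents x 5 general QRP items; "Don't know/not relevant"
# is assumed to have been recoded to NaN before this step.
rng = np.random.default_rng(2)
data = rng.integers(1, 6, size=(200, 5)).astype(float)
data[rng.random(data.shape) < 0.15] = np.nan  # inject ~15% item nonresponse
survey = pd.DataFrame(data, columns=[f"qrp_general_{i}" for i in range(1, 6)])

def item_nonresponse(df: pd.DataFrame, cols: list[str]) -> pd.Series:
    """Share of respondents with a missing answer, per item."""
    return df[cols].isna().mean().sort_values(ascending=False)

print(item_nonresponse(survey, list(survey.columns)).round(3))
```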

Measuring QRPs in Qualitative Research - the Limited Role of Pretests

Studies of QRP prevalence focus on quantitative research approaches, where there exists a common understanding of the interpretation of scientific evidence, clearly recommended procedures, and established QRP items related to compliance with these procedures. In the heterogeneous field of qualitative research, there are several established standards for reporting the research (O’Brien et al., 2014; Tong et al., 2007) but, as noted above, hardly any commonly accepted survey items that capture behaviors fulfilling the criteria for QRPs. As a result, the studied survey project had to design such items from scratch during the survey development process. After technical and cognitive tests, four items were selected (see Table 6).

Despite the series of pretests, however, the first two of these items drew severe criticism from a few respondents in the survey’s open commentary section. Here, qualitative researchers argued that the items were unduly influenced by the truth claims of quantitative studies, whereas their research dealt with interpretation and discourse analysis. Thus, they rejected the items regarding the selective usage of respondents and of interview quotes as indicators of questionable practices:

“The alternative regarding using quotes is a bit misleading. Supporting your results by quotes is a way to strengthen credibility in a qualitative method….” “The question about dubious practices is off target for us, who work with interpretation rather than solid truths. You can present new interpretations, but normally that does not imply that previous ‘findings’ should be considered incorrect.” “The questions regarding qualitative research were somewhat irrelevant. Often this research is not guided by a given hypothesis, and researchers may use a convenient sample without this resulting in lower quality.”

One comment focused on other problems related to qualitative research:

“Several questions do not quite capture the ethical dilemmas we wrestle with. For example, is the issue of dishonesty and ‘inaccuracies’ a little misplaced for us who work with interpretation? …At the same time, we have a lot of ethical discussions, which, for example, deal with power relations between researchers and ‘researched’, participant observation/informal contacts, and informed consent (rather than patients participating in a study)”.

Unfortunately, the survey received these comments and criticism only after the full-scale rollout and not during the pretest rounds. Thus, we had no chance to replace the contested items with other formulations or contemplate a differentiation of the subsection to target specific types of qualitative research with appropriate questions. Instead, we had to limit the post-roll-out survey analysis to the last two items in Table  6 , although they captured devious behaviors rather than gray zone practices.

Why, then, was this criticism of the QRP items related to qualitative research not exposed in the pretest phase? This is a relevant question, also for future survey designers. An intuitive answer could be that the research team only involved quantitative researchers. However, as highlighted above, the pretest participants varied in their research methods: some exclusively used qualitative methods, others employed mixed methods, and some utilized quantitative methods. This diversity suggests that the selection of test participants was appropriate. Moreover, all three members of the research team had experience of both quantitative and qualitative studies. However, as discussed above, the field of qualitative research involves several different types of research with different goals and methods – from detailed case studies grounded in original empirical fieldwork, to participant observations of complex organizational phenomena, to discursive re-interpretations of previous studies. Of the 3,005 respondents who answered the survey in a satisfactory way, only 16 respondents, or 0.5%, made critical comments about the QRP items related to qualitative research. A failure to capture the objections of such a small proportion in a pretest phase is hardly surprising. The general problem can be compared with the challenge of detecting negative side-effects in drug development: although pharmaceutical firms conduct large-scale tests of candidate drugs before government approval, doctors nevertheless detect new side-effects when the medicine is rolled out to significantly more people than the test populations – and these less frequent problems are then reported in the updated drug information (Galeano et al., 2020; McNeil et al., 2010).

In the social sciences, the purpose of pre-testing is to identify problems related to ambiguities and bias in item formulation and survey format and initiate a search for relevant solutions. A pre-test on a small, selected subsample cannot guarantee that all respondent problems during the full-scale data collection will be detected. The pretest aims to reduce errors to acceptable levels and ensure that the respondents will understand the language and terminology chosen. Pretesting in survey development is also essential to help the researchers to assess the overall flow and structure of the survey, and to make necessary adjustments to enhance respondent engagement and data quality (Ikart, 2019 ; Presser & Blair, 1994 ).

In our view, more pretests would hardly solve the epistemological challenge of formulating generally acceptable QRP items for qualitative research. The open comments studied here suggest that there is no one-size-fits-all solution. If this is right, the problem should rather be reformulated as one of identifying different strands of qualitative research with diverse views of integrity and evidence, which need to be captured with different instruments. Addressing this challenge in a comprehensive way, however, goes far beyond the current study.

Controversiality and Collegial Sensitivity - the Challenge of Predicting Nonresponse

Studies of research integrity, questionable research practices, and misconduct in science tend to be organizationally controversial and personally sensitive. If university leaders are asked to support such studies, there is a considerable risk that the answer will be negative. In the case studied here, the survey roll-out did not depend on any active organizational participation, since Statistics Sweden possessed all relevant respondent information in-house. This, we assumed, would take the controversiality problem off the agenda. Our belief was supported by the absence of complaints about a potential negativity bias among the pretest participants. Instead, the problem surfaced when the survey was rolled out and the full set of respondents encountered it. The open comment section at the end of the survey provided insights into this reception.

Many respondents provided positive feedback, reflected in 30 different comments such as:

“Thank you for doing this survey. I really hope it will lead to changes because it is needed”. “This is an important survey. However, there are conflicting norms, such as those you cite in the survey, /concerning/ for example, data protection. How are researchers supposed to be open when we cannot share data for re-analysis?” “I am glad that the problems with egoism and non-collegiality are addressed in this manner”.

Several of them asked for more critical questions regarding power, self-interest, and leadership:

“What I lacked in the survey were items regarding academic leadership. Otherwise, I am happy that someone is doing research on these issues”. “A good survey, but it needs to be complemented with questions regarding researchers who put their commercial interests above research and exploit academic grants for commercial purposes”.

A small minority criticized the survey for being overly negative towards academia:

“A major part of the survey feels very negative and /conveys/ the impression that you have a strong pre-understanding of academia as a horrible environment”. “Some of the questions are uncomfortable and downright suggestive. Why such a negative attitude towards research?” “The questions have a tendency to make us /the respondents/ informers. An unpleasant feeling when you are supposed to lay information against your university”. “Many questions are hard to answer, and I feel that they measure my degree of suspicion against my closest colleagues and their motivation… Several questions I did not want to answer since they contain a negative interpretation of behaviors which I don’t consider as automatically negative”.

A few of these respondents stated that they abstained from answering some of the ‘negative questions’, since they did not want to report on or slander their colleagues. The general impact is hard to assess. Only about 20% of the respondents offered open survey comments, and only seven argued that questions were “negative”. This small number explains why the issue of negativity did not show up during the testing process. However, a perceived sense of negativity may have affected the willingness to answer among more respondents than those who provided free-text comments.

Conclusion - The Need for a Cumulative Knowledge Trajectory in Integrity Studies

In the broad field of research integrity studies, investigations of QRPs in different contexts and countries play an important role. The comparability of the results, however, depends on the conceptual focus of the survey design and the quality of the survey items. This paper starts with a discussion of four common problems in QRP research: the problems of precision, social desirability, incomplete coverage, and organizational controversiality and sensitivity. This is followed by a case study of how these problems were addressed in a detailed survey design process. An assessment of the solutions employed in the studied survey design reveals progress as well as unresolved issues.

Overall, the paper shows that the problem of precision could be effectively managed through explicit conceptual definitions and careful item design.

The problem of social desirability bias was probably reduced by means of a non-self response format referring to preferences and behaviors among colleagues instead of personal behaviors. However, an investigation of open respondent comments indicated that the reduced risk of social desirability bias came at the expense of higher uncertainty, due to the respondents’ lack of insight into the concrete practices of their colleagues.

The problem of incomplete coverage of QRPs in qualitative research was initially linked by the authors to the lack of standard items for capturing QRPs in qualitative studies. Open comments at the end of the survey, however, suggested that the lack of such standards cannot easily be remedied by the design of new items. Rather, it seems to be an epistemological challenge related to the multifarious nature of the qualitative research field, where the understanding of ‘evidence’ is unproblematic in some qualitative sub-fields but contested in others. This conjecture and other possible explanations will hopefully be addressed in forthcoming epistemological and empirical studies.

Regarding the problem of controversiality and sensitivity, previous studies show that QRP research is a controversial and sensitive area for academic executives and university brand managers. The case study discussed here indicates that this is a sensitive subject also for rank-and-file researchers who may hesitate to answer, even when the questions do not target the respondents’ own practices but the practices and preferences of their colleagues. Future survey designers may need to engage in framing, presenting, and balancing sensitive items to reduce respondent suspicions and minimize the rate of missing responses. Reflections on the case indicate that this is doable but requires thoughtful design, as well as repeated tests, including feedback from a broad selection of prospective participants.

In conclusion, the paper suggests that more resources should be spent on the systematic evaluation of different survey designs and item formulations. In the long term, such investments in method development will yield a higher proportion of robust and comparable studies. This would mitigate the problems discussed here and contribute to the creation of a much-needed cumulative knowledge trajectory in research integrity studies.

An issue not covered here is that surveys, however finely developed, only give quantitative information about patterns, behaviors, and structures. An understanding of underlying thoughts and perspectives requires other procedures. Thus, methods that integrate and triangulate qualitative and quantitative data —known as mixed methods (Karabag & Berggren, 2016 ; Ordu & Yılmaz, 2024 ; Smajic et al., 2022 )— may give a deeper and more complete picture of the phenomenon of QRP.

Data Availability

The data supporting the findings of this study are available from the corresponding author upon reasonable request.


Agnoli, F., Wicherts, J. M., Veldkamp, C. L., Albiero, P., & Cubelli, R. (2017). Questionable research practices among Italian research psychologists. PLoS One , 12(3), e0172792.

Anderson, M. S., Ronning, E. A., De Vries, R., & Martinson, B. C. (2007). The perverse effects of competition on scientists’ work and relationships. Science and Engineering Ethics , 13 , 437–461.


Anderson, M. S., Ronning, E. A., Devries, R., & Martinson, B. C. (2010). Extending the Mertonian norms: Scientists’ subscription to norms of Research. The Journal of Higher Education , 81 (3), 366–393. https://doi.org/10.1353/jhe.0.0095

Andrade, C. (2021). HARKing, cherry-picking, p-hacking, fishing expeditions, and data dredging and mining as questionable research practices. The Journal of Clinical Psychiatry , 82 (1), 25941.

Artino Jr, A. R., Driessen, E. W., & Maggio, L. A. (2019). Ethical shades of gray: International frequency of scientific misconduct and questionable research practices in health professions education. Academic Medicine, 94(1), 76–84.

Aubert Bonn, N., & Pinxten, W. (2019). A decade of empirical research on research integrity: What have we (not) looked at? Journal of Empirical Research on Human Research Ethics , 14 (4), 338–352.

Banks, G. C., O’Boyle Jr, E. H., Pollack, J. M., White, C. D., Batchelor, J. H., Whelpley, C. E., & Adkins, C. L. (2016). Questions about questionable research practices in the field of management: A guest commentary. Journal of Management , 42 (1), 5–20.

Beatty, P., & Herrmann, D. (2002). To answer or not to answer: Decision processes related to survey item nonresponse. Survey Nonresponse , 71 , 86.


Berggren, C. (2016). Scientific Publishing: History, practice, and ethics (in Swedish: Vetenskaplig Publicering: Historik, Praktik Och Etik) . Studentlitteratur AB.

Berggren, C., & Karabag, S. F. (2019). Scientific misconduct at an elite medical institute: The role of competing institutional logics and fragmented control. Research Policy , 48 (2), 428–443. https://doi.org/10.1016/j.respol.2018.03.020

Braxton, J. M. (1993). Deviancy from the norms of science: The effects of anomie and alienation in the academic profession. Research in Higher Education , 54 (2), 213–228. https://www.jstor.org/stable/40196105

Bray, D., & von Storch, H. (2017). The normative orientations of climate scientists. Science and Engineering Ethics , 23 (5), 1351–1367.

Breakwell, G. M., Wright, D. B., & Barnett, J. (2020). Research questions, design, strategy and choice of methods. Research Methods in Psychology , 1–30.

Brenner, P. S. (2020). Why survey methodology needs sociology and why sociology needs survey methodology: Introduction to understanding survey methodology: Sociological theory and applications. In Understanding survey methodology: Sociological theory and applications (pp. 1–11). https://doi.org/10.1007/978-3-030-47256-6_1

Bruton, S. V., Medlin, M., Brown, M., & Sacco, D. F. (2020). Personal motivations and systemic incentives: Scientists on questionable research practices. Science and Engineering Ethics , 26 (3), 1531–1547.

Butler, N., Delaney, H., & Spoelstra, S. (2017). The gray zone: Questionable research practices in the business school. Academy of Management Learning & Education , 16 (1), 94–109.

Byrn, M. J., Redman, B. K., & Merz, J. F. (2016). A pilot study of universities’ willingness to solicit whistleblowers for participation in a study. AJOB Empirical Bioethics , 7 (4), 260–264.

Chalmers, I., & Glasziou, P. (2009). Avoidable waste in the production and reporting of research evidence. The Lancet , 374 (9683), 86–89.

de Vrieze, J. (2021). Large survey finds questionable research practices are common. Science . https://doi.org/10.1126/science.373.6552.265

Dore, R. P. (1973/2011). British Factory Japanese Factory: The origins of National Diversity in Industrial Relations, with a New Afterword . University of California Press/Routledge.

Downes, M. (2017). University scandal, reputation and governance. International Journal for Educational Integrity , 13 , 1–20.

Duncan, L. J., & Cheng, K. F. (2021). Public perception of NHS general practice during the first six months of the COVID-19 pandemic in England. F1000Research , 10 .

Fanelli, D. (2009). How many scientists fabricate and falsify research? A systematic review and meta-analysis of survey data. PLoS One , 4(5), e5738.

Fiedler, K., & Schwarz, N. (2016). Questionable research practices revisited. Social Psychological and Personality Science , 7 (1), 45–52.

Flick, U. (2013). The SAGE Handbook of Qualitative Data Analysis. Sage.

Fraser, H., Parker, T., Nakagawa, S., Barnett, A., & Fidler, F. (2018). Questionable research practices in ecology and evolution. PLoS One , 13(7), e0200303.

Galeano, D., Li, S., Gerstein, M., & Paccanaro, A. (2020). Predicting the frequencies of drug side effects. Nature Communications , 11 (1), 4575.

Gopalakrishna, G., Ter Riet, G., Vink, G., Stoop, I., Wicherts, J. M., & Bouter, L. M. (2022). Prevalence of questionable research practices, research misconduct and their potential explanatory factors: A survey among academic researchers in the Netherlands. PLoS One , 17 (2), e0263023.

Hasselberg, Y. (2012). Science as Work: Norms and Work Organization in Commodified Science (in Swedish: Vetenskap Som arbete: Normer och arbetsorganisation i den kommodifierade vetenskapen) . Gidlunds förlag.

Hill, J., Ogle, K., Gottlieb, M., Santen, S. A., & Artino Jr, A. R. (2022). Educator’s blueprint: A how-to guide for collecting validity evidence in survey-based research. AEM Education and Training, 6(6), e10835.

Hinkin, T. R. (1995). A review of scale development practices in the study of organizations. Journal of Management , 21 (5), 967–988.

Hinkin, T. R. (1998). A brief tutorial on the development of measures for use in survey questionnaires. Organizational Research Methods , 1 (1), 104–121.

Huistra, P., & Paul, H. (2022). Systemic explanations of scientific misconduct: Provoked by spectacular cases of norm violation? Journal of Academic Ethics , 20 (1), 51–65.

Hurtt, R. K. (2010). Development of a scale to measure professional skepticism. Auditing: A Journal of Practice & Theory , 29 (1), 149–171.

Ikart, E. M. (2019). Survey questionnaire survey pretesting method: An evaluation of survey questionnaire via expert reviews technique. Asian Journal of Social Science Studies , 4 (2), 1.

Karabag, S. F., & Berggren, C. (2016). Misconduct, marginality and editorial practices in management, business and economics journals. PLoS One , 11 (7), e0159492. https://doi.org/10.1371/journal.pone.0159492

Karabag, S. F., Berggren, C., Pielaszkiewicz, J., & Gerdin, B. (2024). Minimizing questionable research practices–the role of norms, counter norms, and micro-organizational ethics discussion. Journal of Academic Ethics , 1–27. https://doi.org/10.1007/s10805-024-09520-z

Kim, S. Y., & Kim, Y. (2018). The ethos of Science and its correlates: An empirical analysis of scientists’ endorsement of Mertonian norms. Science Technology and Society , 23 (1), 1–24. https://doi.org/10.1177/0971721817744438

Lawlor, J., Thomas, C., Guhin, A. T., Kenyon, K., Lerner, M. D., Consortium, U., & Drahota, A. (2021). Suspicious and fraudulent online survey participation: Introducing the REAL framework. Methodological Innovations , 14 (3), 20597991211050467.

Levelt, W. J., Drenth, P., & Noort, E. (2012). Flawed science: The fraudulent research practices of social psychologist Diederik Stapel (in Dutch: Falende wetenschap: De frauduleuze onderzoekspraktijken van social-psycholoog Diederik Stapel). Commissioned by Tilburg University, the University of Amsterdam, and the University of Groningen. http://hdl.handle.net/11858/00-001M-0000-0010-258A-9

Lietz, P. (2010). Research into questionnaire design: A summary of the literature. International Journal of Market Research , 52 (2), 249–272.


Acknowledgements

We thank Jennica Wallenborg Likidis, Statistics Sweden, for providing expert support in the survey design. We are grateful to colleagues Ingrid Johansson Mignon, Cecilia Enberg, Anna Dreber Almenberg, Andrea Fried, Sara Liin, Mariano Salazar, Lars Bengtsson, Harriet Wallberg, Karl Wennberg, and Thomas Magnusson, who joined the pretest or cognitive tests. We also thank Ksenia Onufrey, Peter Hedström, Jan-Ingvar Jönsson, Richard Öhrvall, Kerstin Sahlin, and David Ludvigsson for constructive comments or suggestions.

Open access funding was provided by Linköping University. The project was funded by Forte, the Swedish Research Council for Health, Working Life and Welfare ( https://www.vr.se/swecris?#/project/2018-00321_Forte ), Grant No. 2018-00321.

Author information

Authors and Affiliations

Department of Management and Engineering [IEI], Linköping University, Linköping, SE-581 83, Sweden

Christian Berggren & Solmaz Filiz Karabag

Department of Surgical Sciences, Uppsala University, Uppsala University Hospital, entrance 70, Uppsala, SE-751 85, Sweden

Bengt Gerdin

Department of Civil and Industrial Engineering, Uppsala University, Box 169, Uppsala, SE-751 04, Sweden

Solmaz Filiz Karabag


Contributions

Conceptualization: CB. Survey Design: SFK, CB. Methodology: SFK, BG, CB. Visualization: SFK, BG. Funding acquisition: SFK. Project administration and management: SFK. Writing – original draft: CB. Writing – review & editing: CB, BG, SFK. Approval of the final manuscript: SFK, BG, CB.

Corresponding author

Correspondence to Solmaz Filiz Karabag .

Ethics declarations

Ethics approval and consent to participate

The Swedish Act concerning the Ethical Review of Research Involving Humans (2003:460) defines the types of studies that require ethics approval. In line with the General Data Protection Regulation (EU 2016/679), the act applies to studies that collect personal data revealing racial or ethnic origin, political opinions, trade union membership, religious or philosophical beliefs, health, or sexual orientation. The present study does not involve any of the above, which is why no formal ethical permit was required. The ethical aspects of the project and its compliance with the guidelines of the Swedish Research Council (2017) were also part of the review process at the project’s public funding agency, Forte.

Competing Interests

The authors declare that they have no competing interests.

Supporting Information

The complete case study survey of social and medical science researchers in Sweden 2020.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .

Reprints and permissions

About this article

Berggren, C., Gerdin, B. & Karabag, S.F. Developing Surveys on Questionable Research Practices: Four Challenging Design Problems. J Acad Ethics (2024). https://doi.org/10.1007/s10805-024-09565-0


Accepted : 23 August 2024

Published : 02 September 2024

DOI : https://doi.org/10.1007/s10805-024-09565-0


  • Questionable Research Practices
  • Normative Environment
  • Organizational Climate
  • Survey Development
  • Design Problems
  • Problem of Incomplete Coverage
  • Survey Design Process
  • Baseline Survey
  • Pre-testing
  • Technical Assessment
  • Cognitive Interviews
  • Social Desirability
  • Sensitivity
  • Organizational Controversiality
  • Challenge of Nonresponse
  • Qualitative Research
  • Quantitative Research
  • Open access
  • Published: 04 September 2024

Insights into research activities of senior dental students in the Middle East: A multicenter preliminary study

  • Mohammad S. Alrashdan 1 , 2 ,
  • Abubaker Qutieshat 3 , 4 ,
  • Mohamed El-Kishawi 5 ,
  • Abdulghani Alarabi 6 ,
  • Lina Khasawneh 7 &
  • Sausan Al Kawas 1  

BMC Medical Education volume 24, Article number: 967 (2024)


Background

Despite the increasing recognition of the importance of research in undergraduate dental education, few studies have explored the nature of undergraduate research activities in dental schools in the Middle East region. This study aimed to evaluate the research experience of final year dental students from three dental schools in the Middle East.

Methods

A descriptive, cross-sectional study was conducted among final-year dental students from three institutions, namely Jordan University of Science and Technology (Jordan), the University of Sharjah (UAE), and Oman Dental College (Oman). Participants were asked about the nature and scope of their research projects, the processes involved, and the perceived benefits of engaging in research.

Results

A total of 369 respondents completed the questionnaire. Cross-sectional studies were the most common research type (50.4%), with public health (29.3%) and dental education (27.9%) the predominant domains. More than half of the research proposals were developed through discussions with instructors (55.0%), and literature reviews primarily utilized PubMed (70.2%) and Google Scholar (68.5%). Statistical analysis was usually carried out with the instructor’s assistance (45.2%) or using specialized software (45.5%). Students typically concluded their projects with a manuscript (58.4%) and found the discussion section the most challenging to write (42.0%). The research activity was considered highly beneficial, especially for teamwork and communication skills and for data interpretation skills, with 74.1% of students reporting a positive impact on their research perspectives.

Conclusions

The research experience was generally positive among the surveyed dental students. However, there is a need for more diversity in research domains, especially qualitative studies, and for greater guidance of students in research activities, particularly in manuscript writing and publication. The outcomes of this study could provide valuable insights for dental schools seeking to improve their undergraduate research activities.


Introduction

The importance of research training for undergraduate dental students cannot be overstressed, and many reports have thoroughly discussed the necessity of incorporating research components into dental curricula [ 1 , 2 , 3 , 4 ]. Structured research training is crucial to ensure that dental graduates adhere to evidence-based practices and policies in their future careers and are able to critically appraise the overwhelming volume of dental and relevant medical literature so that only rigorous scientific outcomes are adopted. Furthermore, a sound research background is imperative for dental graduates to overcome some of the reported barriers to the uptake of scientific evidence, including lack of familiarity, uncertain applicability, and lack of agreement with the available evidence [ 5 ]. There is even evidence that engagement in research activities can improve students’ academic achievement [ 6 ]. Importantly, many accreditation bodies around the globe require a distinct research component with clear learning outcomes in dental school curricula [ 1 ].

Research projects and courses have become fundamental elements of modern biomedical education worldwide. The integration of research training in biomedical academic programs has evolved over the years, reflecting the growing recognition of research as a cornerstone of evidence-based practice [ 7 ]. Notwithstanding the numerous opportunities presented by the inclusion of research training in biomedical programs, it poses significant challenges such as limited resources, varying levels of student preparedness, and the need for faculty development in research mentorship [ 8 , 9 ]. Addressing these challenges is essential to maximize the benefits of research training and to ensure that all students can engage meaningfully in research activities.

While there are different models for incorporating research training into biomedical programs, including dentistry, almost all share the common goals of equipping students with basic research skills and techniques and training them in critical thinking, and they culminate in a research project undertaken either as an elective or summer training course or, more commonly, as a compulsory course required for graduation [ 2 , 4 , 10 ].

Dental colleges in the Middle East region are no exception, and most are continuously striving to update their curricula to improve the undergraduate research component and cultivate a research-oriented academic teaching environment. Despite these efforts, there remains a significant gap in our understanding of the nature and scope of student-led research in these institutions, the challenges students face, and the perceived benefits of their research experiences. Furthermore, most studies in this domain confine data collection to a single center in a single country, which limits the value of the outcomes. It is therefore important to conduct studies with representative samples, preferably across multiple institutions, to address the existing knowledge gaps, provide insights that can inform future curricular improvements, and support the development of more effective research training programs in dental education across the region. Accordingly, this study was designed and conducted to elucidate some of these knowledge gaps.

The faculty of dentistry at Jordan University of Science and Technology (JUST) is the largest in Jordan and offers a five-year bachelor of dental surgery (BDS) program. The faculty is home to more than 1600 undergraduate and 75 postgraduate students. The college of dental medicine at the University of Sharjah (UoS) is likewise the largest in the UAE, with both undergraduate and postgraduate programs and local and international accreditation; it follows a (1 + 5) program structure, whereby students must finish a foundation year to qualify for the five-year BDS program. Furthermore, the UoS dental college applies an integrated, stream-based curriculum. Finally, Oman Dental College (ODC) is the sole dental school in Oman and is an independent college that does not belong to a university.

The aim of this study was to evaluate the research experience of final year dental students from three major dental schools in the Middle East, namely JUST from Jordan, UoS from the UAE, and ODC from Oman. The hypothesis of this study was that research activities conducted at dental schools have no perceived benefit for final year dental students.

The rationale for selecting these three dental schools stems from the diversity in the dental curriculum and program structure as well as the fact that final year BDS students are required to conduct a research project as a prerequisite for graduation in the three schools. Furthermore, the authors from these dental schools have a strong scholarly record and have been collaborating in a variety of academic and research activities.

Materials and methods

The current study is a population-based descriptive cross-sectional observational study. The study was conducted using an online self-administered questionnaire and targeted final-year dental students at three dental schools in the Middle East region: JUST from Jordan, UoS from the UAE, and ODC from Oman. The study took place in the period from January to June 2023.

To be included in the study, participants had to be final-year dental students at one of the three participating schools, to have finished their research project, and to have agreed to participate. Students not in their final year, those who had not conducted or finished their research projects, and those who declined to participate were excluded.

The study was approved by the institutional review board of JUST (Reference: 724–2022), the research ethics committee of the UoS (Reference: REC-22-02-22-3) as well as ODC (Reference: ODC-MA-2022-166). The study adhered to the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) guidelines [ 11 ]. The checklist is available as a supplementary file.

Sample size determination was based on previous studies with a similar design and was further confirmed with a statistical formula. Such studies targeted either a single dental or medical school or multiple schools, with sample sizes generally ranging from 158 to 360 [ 4 , 8 , 9 , 10 , 12 ]. Furthermore, to confirm the sample size, the following two-step formula for finite population sample size calculation was used [ 13 ]:

n₀ = Z² × P × (1 − P) / E²

wherein Z is the confidence level at 95% = 1.96, P is the population proportion = 0.5, and E is the margin of error = 0.05. Based on this formula, the resultant initial sample size was 384.

n = n₀ / (1 + (n₀ − 1) / N)

wherein n₀ is the initial sample size = 384 and N is the total population size (the total number of final year dental students in the three schools) = 443. Based on this formula, the adjusted sample size was 206.
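For readers who want to reproduce the calculation, the following is a minimal Python sketch of the two-step computation described above; the function names are ours, and the inputs (Z = 1.96, P = 0.5, E = 0.05, N = 443) are taken from the text.

```python
def initial_sample_size(z: float = 1.96, p: float = 0.5, e: float = 0.05) -> float:
    """Cochran's formula for an infinite population: n0 = Z^2 * P * (1 - P) / E^2."""
    return z ** 2 * p * (1 - p) / e ** 2

def finite_population_correction(n0: float, population: int) -> float:
    """Adjust the initial sample size for a finite population: n = n0 / (1 + (n0 - 1) / N)."""
    return n0 / (1 + (n0 - 1) / population)

n0 = initial_sample_size()                   # 384.16, reported as 384
n = finite_population_correction(384, 443)   # ~205.9, reported as 206
print(round(n0), round(n))                   # -> 384 206
```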

An online, self-administered questionnaire comprising 13 questions was designed to assess the research experience of final year dental students in the participating schools. The questionnaire was initially prepared by the first three authors and was then reviewed and approved by the other authors. It was developed following an extensive review of the relevant literature to identify the most critical aspects of research projects conducted at dental and medical schools and the most common challenges students experience with regard to research project design, research components, attributes, analysis, interpretation, drafting, writing, and presentation of the final outcomes.

The questionnaire was then pretested for both face and content validity. Face validity was assessed in a pilot study that evaluated clarity, validity, and comprehensiveness in a small cohort of 30 students. Content validity was assessed by the authors, who are all experienced academics with strong research profiles and experience in supervising undergraduate and postgraduate research projects. The authors critically evaluated each item and made changes whenever required. Furthermore, Cronbach’s alpha was used to assess the internal consistency (reliability) of the questionnaire and was found to be 0.79. Thereafter, online invitations along with the questionnaire were sent to a total of 443 students (280 from JUST, 96 from UoS, and 67 from ODC), representing all final year students at the three schools. A first reminder was sent 2 weeks later, and a second reminder was sent after another 2 weeks.
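Cronbach’s alpha can be computed directly from the item responses. The following is a minimal sketch using pandas; the file and column names are hypothetical, not taken from the study.

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: (k / (k - 1)) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Hypothetical usage: one column per questionnaire item, one row per respondent
# df = pd.read_csv("responses.csv")
# print(round(cronbach_alpha(df[[f"q{i}" for i in range(1, 14)]]), 2))  # e.g. 0.79
```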

In addition to basic demographic details, the questionnaire comprised questions related to the type of study conducted, the scope of the research project, whether the research project was proposed by the students or the instructors or both, the literature review part of the project, the statistical analysis performed, the final presentation of the project, the writing up of the resultant manuscript if applicable, the perceived benefits of the research project and finally suggestions to improve the research component for future students.

The outcomes of the study were the students’ research experience in terms of research design, literature review, data collection, analysis, interpretation and presentation, students’ perceived benefits from research, students’ perspective towards research in their future career and students’ suggestions to improve their research experience.

The exposures were the educational and clinical experience of students, research supervision by mentors and faculty members, and participation in extracurricular activities, while the predictors were the academic performance of students, previous research experience and self-motivation.

The collected responses were entered into a Microsoft Excel spreadsheet and analyzed using SPSS Statistics software, version 20.0 (SPSS Inc., Chicago, IL, USA). Descriptive data were presented as frequencies and percentages. For this study, only descriptive statistics were carried out as the aim was not to compare and contrast the three schools but rather to provide an overview of the research activities at the participating dental schools.
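As an illustration of this descriptive analysis, frequencies and percentages for a categorical item can be tabulated as follows; this is a sketch only, and the file and column names are hypothetical.

```python
import pandas as pd

# Hypothetical export of the survey responses; "study_type" is an illustrative column name
df = pd.read_csv("responses.csv")

counts = df["study_type"].value_counts()
percent = (counts / len(df) * 100).round(1)
print(pd.DataFrame({"n": counts, "%": percent}))
```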

The heatmap generated to represent the answers for question 11 (perceived benefits of the research activity) was created using Python programming language (Python 3.11) and the pandas, seaborn, and matplotlib libraries. The heatmap was customized to highlight the count and percentage of responses in each component, with the highest values shown in red and the lowest values shown in blue.
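The paper does not publish its plotting script, but a heatmap of this kind can be sketched with pandas, seaborn, and matplotlib as follows; the counts below are illustrative, not the study’s data.

```python
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

# Illustrative counts: rows are perceived-benefit components, columns are rating levels
counts = pd.DataFrame(
    {"No benefit": [20, 25], "Neutral": [104, 110],
     "Beneficial": [124, 122], "Highly beneficial": [121, 101]},
    index=["Teamwork & communication", "Data collection & interpretation"],
)
percent = counts.div(counts.sum(axis=1), axis=0) * 100

# Annotate each cell with the count and percentage; "coolwarm" maps high values to red, low to blue
labels = counts.astype(str) + "\n(" + percent.round(1).astype(str) + "%)"
ax = sns.heatmap(counts, annot=labels, fmt="", cmap="coolwarm")
ax.set_title("Perceived benefits of the research activity")
plt.tight_layout()
plt.show()
```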

Potentially eligible participants in this study were all final year dental students at the three dental schools (443 students, 280 from JUST, 96 from UoS and 67 from ODC). All potentially eligible participants were confirmed to be eligible and were invited to participate in the study.

The total number of participants included in the study, i.e. the total number of students who completed the questionnaire and whose responses were analyzed, was 369 (223 from JUST, 80 from UoS and 66 from ODC). The overall response rate was 83.3% (79.6% from JUST, 83.3% from UoS and 98.5% from ODC).

The highest proportion of participants were from JUST ( n  = 223, 60.4%), followed by UoS ( n  = 80, 21.7%), and then ODC ( n  = 66, 17.9%). The majority of the participants were females ( n  = 296, 80.4%), while males represented a smaller proportion ( n  = 73, 19.6%). It is noteworthy that these proportions reflect the size of the cohorts in each college.

With regard to the type of study, half of the final-year dental students in the three colleges participated in observational cross-sectional (i.e., population-based) studies ( n  = 186, 50.4%), while literature review projects were the second most common type ( n  = 83, 22.5%), followed by experimental studies ( n  = 55, 14.9%). Longitudinal studies, randomized controlled trials, and other types of studies (e.g., qualitative studies, case reports) were less common ( n  = 5, 1.4%; n  = 10, 2.7%; and n  = 30, 8.1%, respectively). The distribution of study types within each college is shown in Fig.  1 .

Figure 1. Distribution in percent of study types within each college. JUST: Jordan University of Science and Technology; UOS: University of Sharjah; ODC: Oman Dental College.

The most common scope of research projects among final-year dental students was in public health/health services ( n  = 108, 29.3%) followed by dental education/attitudes of students or faculty ( n  = 103, 27.9%) (Fig.  2 ). Biomaterials/dental materials ( n  = 62, 16.8%) and restorative dentistry ( n  = 41, 11.1%) were also popular research areas. Oral diagnostic sciences (oral medicine/oral pathology/oral radiology) ( n  = 28, 7.6%), oral surgery ( n  = 12, 3.2%) and other research areas ( n  = 15, 4.1%) were less common among the participants. Thirty-two students (8.7%) were engaged in more than one research project.

Figure 2. Percentages of the scope of research projects among final-year dental students. JUST: Jordan University of Science and Technology; UOS: University of Sharjah; ODC: Oman Dental College.

The majority of research projects were proposed through a discussion and agreement between the students and the instructor (55.0%). Instructors proposed the topic for 36.6% of the research projects, while students proposed the topic for the remaining 8.4% of the projects.

Most dental students (79.1%) performed the literature review for their research projects using internet search engines. Material provided by the instructor was used for the literature review by 15.5% of the students, while 5.4% of the students did not perform a literature review. More than half of the students ( n  = 191, 51.7%) used multiple search engines in their literature search. The most popular search engines for literature review among dental students were PubMed (70.2% of cases) and Google Scholar (68.5% of cases). Scopus was used by 12.8% of students, while other search engines were used by 15.6% of students.

The majority of dental students ( n  = 276, 74.8%) did not utilize the university library to gain access to the required material for their research. In contrast, 93 students (25.2%) reported using the university library for this purpose.

Dental students performed the statistical analysis in their projects primarily by receiving help from the instructor ( n  = 167, 45.2%) or by using specialized software ( n  = 168, 45.5%). A smaller percentage of students ( n  = 34, 9.4%) consulted a professional statistician for assistance. At the end of the research project, 58.4% of students ( n  = 215) presented their work in the form of a manuscript or scientific paper. Other methods of presenting the work included PowerPoint presentations ( n  = 80, 21.7%) and discussions with the instructor ( n  = 74, 19.8%).

For those students who prepared a manuscript at the conclusion of their project, the most difficult part of the writing-up was the discussion section ( n  = 155, 42.0%), followed by the methodology section ( n  = 120, 32.5%), a finding that was common across the three colleges. Fewer students found the introduction ( n  = 13, 3.6%) and conclusion ( n  = 10, 2.7%) sections to be challenging. Additionally, 71 students (19.2%) were not sure which part of the manuscript was the most difficult to prepare (Fig.  3 ).

Figure 3. Percentages of the most difficult part reported by dental students during the writing-up of their projects. JUST: Jordan University of Science and Technology; UOS: University of Sharjah; ODC: Oman Dental College.

The dental students’ perceived benefits from the research activity were evaluated across seven components: literature review skills, research design skills, data collection and interpretation, manuscript writing, publication, teamwork and effective communication, and engagement in continuing professional development.

The majority of students found the research activity to be beneficial or highly beneficial in most of the areas, with the highest ratings observed in teamwork and effective communication, where 33.5% rated it as beneficial and 32.7% rated it as highly beneficial. Similarly, in the area of data collection and interpretation, 33.0% rated it as beneficial and 27.5% rated it as highly beneficial. In the areas of literature review skills and research design skills, 28.6% and 34.0% of students rated the research activity as beneficial, while 25.3% and 22.7% rated it as highly beneficial, respectively. Students also perceived the research activity to be helpful for the manuscript writing, with 27.9% rating it as beneficial and 19.2% rating it as highly beneficial.

When it comes to publication, students’ perceptions were more variable, with 22.0% rating it as beneficial and 11.3% rating it as highly beneficial. A notable 29.9% rated it as neutral, and 17.9% reported no benefit. Finally, in terms of engaging in continuing professional development, 26.8% of students rated the research activity as beneficial and 26.2% rated it as highly beneficial (Fig.  4 ).

Figure 4. Heatmap of the dental students’ perceived benefits from the research activity.

The research course’s impact on students’ perspectives towards being engaged in research activities or pursuing a research career after graduation was predominantly positive, wherein 274 students (74.1%) reported a positive impact on their research perspectives. However, 79 students (21.5%) felt that the course had no impact on their outlook towards research engagement or a research career. A small percentage of students ( n  = 16, 4.4%) indicated that the course had a negative impact on their perspective towards research activities or a research career after graduation.

Finally, when students were asked for their suggestions to improve research activities, they indicated the need for more training and orientation ( n  = 127, 34.6%) and for more time to finish their research projects ( n  = 87, 23.6%). Participation in competitions and more generous funding were considered less important factors ( n  = 78, 21.2% and n  = 63, 17.1%, respectively). Other factors, such as external collaborations and engagement in research groups, were even less important from the students’ perspective (Fig.  5 ).

Figure 5. Percentages of dental students’ suggestions to improve research activities at their colleges.

To the best of our knowledge, this report is the first to provide a comprehensive overview of the research experience of dental students from three leading dental colleges in the Middle East region, which is home to more than 50 dental schools according to the latest SCImago Institutions Ranking ( https://www.scimagoir.com ). The reasonable sample size and the different curricular structures across the participating colleges enhance the value of our findings, not only for dental colleges in the Middle East but also for any dental college seeking to improve and update its undergraduate research activities. Nevertheless, since the study included only three dental schools, the generalizability of the findings is limited, and the outcomes are preliminary in nature.

Cross-sectional (epidemiological) studies and literature reviews were the most common types of research among our cohort of students, which can be attributed to the feasibility, shorter time, and low cost of such projects. By contrast, longitudinal studies and randomized trials, both known to be time consuming and meticulous, were the least common types. These findings concur with previous reports demonstrating that epidemiological studies are popular among undergraduate research projects [ 4 , 10 ]. In a retrospective study, Nalliah et al. demonstrated a remarkable increase in epidemiological research, concurrent with a decline in clinical research, in dental students’ projects over a period of 4 years [ 4 ]. However, literature reviews, whether systematic or scoping, were not as common in some dental schools as in our cohort. For instance, a report from Sweden showed that literature reviews accounted for less than 10% of dental students’ projects [ 14 ]. Overall, qualitative research was seldom performed in our cohort, in agreement with a general trend in dental research that has been linked to dental educators’ limited competence and experience in training students in qualitative research, which requires special training in social research [ 15 , 16 ].

In terms of research topics, public health research, research in dental education, and attitudinal research were the most prevalent among our respondents. In agreement with our results, health care research appears common in dental students’ projects [ 12 ]. In general, these research domains may reflect the underlying interests of the faculty supervisors, who, in our case, were actively engaged in selecting the research topic for more than 90% of the projects. Other areas of research, such as clinical dentistry and basic dental research, are also widely reported [ 4 , 10 , 14 , 17 ].

The selection of a research domain is a critical step in undergraduate research projects, and a systematic approach to identifying research gaps and selecting appropriate research topics is indispensable and should always be given the utmost attention by supervisors [ 18 ].

More than half of the projects in the current report were selected based on a discussion between the students and the supervisor, whereas 36% were selected by the supervisors. Otuyemi et al. reported that about half of undergraduate research topics in a Nigerian dental school were selected by the students themselves; however, a significant proportion of these projects (20%) were subsequently modified by supervisors [ 19 ]. Autonomy in selecting the research topic was discussed in a Swedish report, which suggested that such an approach can enhance students’ learning experience, motivation, and creativity [ 20 ]. Flexibility in selecting the research topic, as well as the faculty supervisor, should be offered to students whenever feasible in order to improve their research experience and outcomes [ 12 ].

PubMed and Google Scholar were the most widely used search engines for performing the literature review. This finding is consistent with recent reviews that classify these two search systems as the most commonly used in biomedical research despite some critical limitations [ 21 , 22 ]. It is noteworthy that students should be competent in critically appraising the available literature to perform the literature review efficiently. Interestingly, only 25% of students used their university library’s access to the search engines, which means that most students retrieved only open access publications for their literature reviews, a finding that should prompt faculty mentors to guide students to utilize the available library services to widen their access to the literature.

Statistical analysis has classically been viewed as an obstacle to undergraduate students undertaking research [ 23 , 24 ], and recent literature has highlighted the crucial need for biomedical students to develop competencies in biostatistics during their studies [ 25 ]. One clear advantage in our cohort is that 45.5% of students used specialized software to analyze their data, which means they had at least an overview of how data are processed and analyzed to reach the final results and inferences. Unfortunately, the remaining 54.5% of students were partially or completely dependent on the supervisor or a professional statistician for data analysis. It is noteworthy that the research projects were appropriately tailored to the undergraduate level, focusing on fundamental statistical analysis methods. Consulting a professional statistician for more complex analyses was therefore done only when indicated, which explains the small percentage of students who did so.

Over half of the participating students (58.4%) prepared a manuscript at the end of their research projects, and for these students the discussion section was the most challenging to prepare, followed by the methodology section. These findings can be explained by the students’ lack of knowledge and experience in conducting and writing up scientific research. The same was reported by Habib et al., who found dental students’ research knowledge to be less than that of medical students [ 26 ]. Critical thinking and scientific writing skills are believed to be of paramount importance to biomedical students, and several strategies have been proposed to enhance these skills for both English- and non-English-speaking students [ 27 , 28 , 29 ].

Dental students in the current study reported a positive attitude towards research and found the research activity to be beneficial in several aspects of their education, with the most significant benefits in the areas of teamwork, effective communication, data collection and interpretation, literature review skills, and research design skills. Similar findings were reported in previous studies, with most participating students reporting a positive impact of their research experience [ 4 , 10 , 12 , 30 ]. Furthermore, 74% of students found that their research experience had a positive impact on their perspectives towards future engagement in research. This finding is promising for resolving the general lack of interest in research among dental students shown in a previous report from one of the participating colleges (JUST), which found that only 2% of students would consider a research career [ 31 ].

Notably, only 11.3% of our students perceived their research experience as highly beneficial with regard to publication. Students’ attitudes towards publishing their research appear inconsistent in the literature, ranging from highly positive rates in developed countries [ 4 ] to relatively low rates in developing countries [ 8 , 32 , 33 ]. This can be attributed to a lack of motivation and poor training in scientific writing skills, a gap that has prompted researchers to propose remedial strategies, as mentioned in the previous section.

Finally, key suggestions by the students to improve the research experience were the provision of more training and orientation, more time to conduct the research, as well as participation in competitions and more funding opportunities. These findings are generally in agreement with previous studies which demonstrated that dental students perceived these factors as potential barriers to improving their research experience [ 8 , 10 , 17 , 30 , 34 ].

A major limitation of the current study is the inclusion of only three dental schools from the Middle East, which may limit the generalizability and validity of the findings. Furthermore, the cross-sectional nature of the study does not allow definitive conclusions to be drawn, as students’ perspectives were not evaluated before and after the research project. Potential confounders include the socioeconomic status of the students, the teaching environment, previous research experience, and self-motivation. Potential sources of bias include variations in the resources and funding available to students’ projects and variations in the quality of supervision. Another potential source is non-response bias, whereby students with low academic performance or low motivation might not respond to the questionnaire. This was managed by sending multiple reminders and aiming for the highest response rate and largest sample size possible.

In conclusion, the current study evaluated key aspects of dental students’ research experience at three dental colleges in the Middle East. While there were several perceived benefits, some aspects need reinforcement and revision, including the paucity of qualitative and clinical research and the need for more rigorous supervision from mentors, with a focus on scientific writing skills and research presentation opportunities. Within its limitations, these outcomes can help in designing future larger scale studies and provide valuable guidance for dental colleges seeking to foster the research component in their curricula. Further studies with larger and more representative samples are required to validate these findings and to explore other relevant elements of undergraduate dental research activities.

Data availability

The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.

Emrick JJ, Gullard A. Integrating research into dental student training: a global necessity. J Dent Res. 2013;92(12):1053–5.


Ramachandra SS. A comprehensive template for inclusion of research in the undergraduate dental curriculum. Health Professions Educ. 2020;6(2):264–70.

Al Sweleh FS. Integrating scientific research into undergraduate curriculum: a new direction in dental education. J Health Spec. 2016;4(1):42–5.

Nalliah RP, Lee MK, Da Silva JD, Allareddy V. Impact of a research requirement in a dental school curriculum. J Dent Educ. 2014;78(10):1364–71.

Lang ES, Wyer PC, Haynes RB. Knowledge translation: closing the evidence-to-practice gap. Ann Emerg Med. 2007;49(3):355–63.

Fechheimer M, Webber K, Kleiber PB. How well do undergraduate research programs promote engagement and success of students? CBE Life Sci Educ. 2011;10(2):156–63.

Kingsley K, O’Malley S, Stewart T, Howard KM. Research enrichment: evaluation of structured research in the curriculum for dental medicine students as part of the vertical and horizontal integration of biomedical training and discovery. BMC Med Educ. 2008;8:1–10.

Alsaleem SA, Alkhairi MAY, Alzahrani MAA, Alwadai MI, Alqahtani SSA, Alaseri YFY, et al. Challenges and Barriers toward Medical Research among Medical and Dental students at King Khalid University, Abha, Kingdom of Saudi Arabia. Front Public Health. 2021;9:706778.

Soe HHK, Than NN, Lwin H, Htay MNNN, Phyu KL, Abas AL. Knowledge, attitudes, and barriers toward research: the perspectives of undergraduate medical and dental students. J Educ Health Promotion. 2018;7(1):23.

Amir LR, Soekanto SA, Julia V, Wahono NA, Maharani DA. Impact of Undergraduate Research as a compulsory course in the Dentistry Study Program Universitas Indonesia. Dent J (Basel). 2022;10(11).

Von Elm E, Altman DG, Egger M, Pocock SJ, Gøtzsche PC, Vandenbroucke JP. The strengthening the reporting of Observational studies in Epidemiology (STROBE) statement: guidelines for reporting observational studies. Lancet. 2007;370(9596):1453–7.

Van der Groen TA, Olsen BR, Park SE. Effects of a Research Requirement for Dental students: a retrospective analysis of students’ perspectives across ten years. J Dent Educ. 2018;82(11):1171–7.

Althubaiti A. Sample size determination: a practical guide for health researchers. J Gen Family Med. 2023;24(2):72–8.

Franzén C. The undergraduate degree project–preparing dental students for professional work and postgraduate studies? Eur J Dent Educ. 2014;18(4):207–13.

Edmunds S, Brown G. Doing qualitative research in dentistry and dental education. Eur J Dent Educ. 2012;16(2):110–7.

Moreno X. Research training in dental undergraduate curriculum in Chile. J Oral Res. 2014;3(2):95–9.

Liu H, Gong Z, Ye C, Gan X, Chen S, Li L, et al. The picture of undergraduate dental basic research education: a scoping review. BMC Med Educ. 2022;22(1):569.

Omar A, Elliott E, Sharma S. How to undertake research as a dental undergraduate. BDJ Student. 2021;28(3):17–8.

Otuyemi OD, Olaniyi EA. A 5-year retrospective evaluation of undergraduate dental research projects in a Nigerian University: graduates’ perceptions of their learning experiences. Eur J Dent Educ. 2020;24(2):292–300.

Franzén C, Brown G. Undergraduate degree projects in the Swedish dental schools: a documentary analysis. Eur J Dent Educ. 2013;17(2):122–6.

Thakre SB, Golawar SH, Thakr SS, Gawande AV. Search engines use for effective literature search in biomedical research. 2014.

Gusenbauer M, Haddaway NR. Which academic search systems are suitable for systematic reviews or meta-analyses? Evaluating retrieval qualities of Google Scholar, PubMed, and 26 other resources. Res Synthesis Methods. 2020;11(2):181–217.

Lorton L, Rethman MP. Statistics: curse of the writing class. J Endod. 1990;16(1):13–8.

Leppink J. Helping medical students in their study of statistics: a flexible approach. J Taibah Univ Med Sci. 2017;12(1):1–7.


Oster RA, Enders FT. The Importance of Statistical Competencies for Medical Research Learners. J Stat Educ. 2018;26(2):137–42.

Habib SR, AlOtaibi SS, Abdullatif FA, AlAhmad IM. Knowledge and attitude of undergraduate Dental students towards Research. J Ayub Med Coll Abbottabad. 2018;30(3):443–8.

Florek AG, Dellavalle RP. Case reports in medical education: a platform for training medical students, residents, and fellows in scientific writing and critical thinking. J Med Case Rep. 2016;10:86.

Wortman-Wunder E, Wefes I. Scientific writing workshop improves confidence in critical writing skills among trainees in the Biomedical sciences. J Microbiol Biol Educ. 2020;21(1).

Barroga E, Mitoma H. Critical thinking and scientific writing skills of Non-anglophone Medical students: a model of Training Course. J Korean Med Sci. 2019;34(3):e18.

Kyaw Soe HH, Than NN, Lwin H, Nu Htay MNN, Phyu KL, Abas AL. Knowledge, attitudes, and barriers toward research: the perspectives of undergraduate medical and dental students. J Educ Health Promot. 2018;7:23.

Alrashdan MS, Alazzam M, Alkhader M, Phillips C. Career perspectives of senior dental students from different backgrounds at a single Middle Eastern institution. BMC Med Educ. 2018;18(1):283.

Chellaiyan VG, Manoharan A, Jasmine M, Liaquathali F. Medical research: perception and barriers to its practice among medical school students of Chennai. J Educ Health Promot. 2019;8:134.

Jeelani W, Aslam SM, Elahi A. Current trends in undergraduate medical and dental research: a picture from Pakistan. J Ayub Med Coll Abbottabad. 2014;26(2):162–6.

Yu W, Sun Y, Miao M, Li L, Zhang Y, Zhang L, et al. Eleven-year experience implementing a dental undergraduate research programme in a prestigious dental school in China: lessons learned and future prospects. Eur J Dent Educ. 2021;25(2):246–60.


Acknowledgements

The authors would like to acknowledge final year dental students at the three participating colleges for their time completing the questionnaire.

No funding was received for this study.

Author information

Authors and Affiliations

Department of Oral and Craniofacial Health Sciences, College of Dental Medicine, University of Sharjah, P.O.Box: 27272, Sharjah, UAE

Mohammad S. Alrashdan & Sausan Al Kawas

Department of Oral Medicine and Oral Surgery, Faculty of Dentistry, Jordan University of Science and Technology, Irbid, Jordan

Mohammad S. Alrashdan

Department of Adult Restorative Dentistry, Oman Dental College, Muscat, Sultanate of Oman

Abubaker Qutieshat

Department of Restorative Dentistry, Dundee Dental Hospital & School, University of Dundee, Dundee, UK

Preventive and Restorative Dentistry Department, College of Dental Medicine, University of Sharjah, Sharjah, UAE

Mohamed El-Kishawi

Clinical Sciences Department, College of Dentistry, Ajman University, Ajman, UAE

Abdulghani Alarabi

Department of Prosthodontics, Faculty of Dentistry, University of Science and Technology, Irbid, Jordan

Lina Khasawneh


Contributions

M.A.: Conceptualization, data curation, project administration; supervision, validation, writing - original draft; writing - review and editing. A.Q: Conceptualization, data curation, project administration; writing - review and editing. M.E: Conceptualization, data curation, project administration; validation, writing - original draft; writing - review and editing. A.A.: data curation, writing - original draft; writing - review and editing. L.K.: Conceptualization, data curation, validation, writing - original draft; writing - review and editing. S.A: Conceptualization, writing - review and editing.

Corresponding author

Correspondence to Mohammad S. Alrashdan .

Ethics declarations

Ethics approval and consent to participate

The current study was approved by the institutional review board of Jordan University of Science and Technology (Reference: 724–2022), the research ethics committee of the University of Sharjah (Reference: REC-22-02-22-3) and Oman Dental College (Reference: ODC-MA-2022-166).

Informed consent

Agreement to the invitation to fill out the questionnaire was considered implied consent to participate.

Consent for publication

Not applicable.

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary Material 1

Supplementary Material 2

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/ .

Reprints and permissions

About this article

Cite this article.

Alrashdan, M.S., Qutieshat, A., El-Kishawi, M. et al. Insights into research activities of senior dental students in the Middle East: A multicenter preliminary study. BMC Med Educ 24 , 967 (2024). https://doi.org/10.1186/s12909-024-05955-5


Received : 12 August 2023

Accepted : 24 August 2024

Published : 04 September 2024

DOI : https://doi.org/10.1186/s12909-024-05955-5


  • Dental research
  • Dental education
  • Undergraduate
  • Literature review
  • Publication


  • Open access
  • Published: 04 September 2024

How to avoid sinking in swamp: exploring the intentions of digitally disadvantaged groups to use a new public infrastructure that combines physical and virtual spaces

  • Chengxiang Chu 1   na1 ,
  • Zhenyang Shen 1   na1 ,
  • Hanyi Xu 2   na1 ,
  • Qizhi Wei 1 &
  • Cong Cao   ORCID: orcid.org/0000-0003-4163-2218 1  

Humanities and Social Sciences Communications volume 11, Article number: 1135 (2024)


  • Science, technology and society

With advances in digital technology, physical and virtual spaces have gradually merged. For digitally disadvantaged groups, this transformation is both convenient and potentially supportive. Previous research on public infrastructure has been limited to improvements in physical facilities, and few researchers have investigated the use of mixed physical and virtual spaces. In this study, we focused on integrated virtual and physical spaces and investigated the factors affecting digitally disadvantaged groups’ intentions to use this new infrastructure. Building on the unified theory of acceptance and use of technology, we focused on social interaction anxiety, identified the characteristics of digitally disadvantaged groups, and constructed a research model of intentions to use the new infrastructure. We obtained 337 valid questionnaire responses and analysed them using partial least squares structural equation modelling. The results showed that performance expectancy, perceived institutional support, perceived marketplace influence, effort expectancy, and facilitating conditions had positive effects on usage intention, whereas the influence of psychological reactance was significantly negative. Finally, social interaction anxiety moderated the effects of performance expectancy, psychological reactance, perceived marketplace influence, and effort expectancy; its effects on perceived institutional support and facilitating conditions were not significant. The results support the creation of inclusive smart cities by ensuring that the new public infrastructure is suitable for digitally disadvantaged groups. This study also introduces the new theoretical concept of mixed physical and virtual spaces as public infrastructure, providing a forward-looking approach to studying digitally disadvantaged groups in this field and paving the way for future theoretical and empirical work.


Introduction

Intelligent systems and modernisation have influenced the direction of people’s lives. With the help of continuously updated and iteratively advancing technology, modern urban construction has taken a ‘big step’ in its development. As China continues to construct smart cities, national investment in public infrastructure has steadily increased. Convenient and efficient public infrastructure has spread throughout the country, covering almost all aspects of residents’ lives and work (Guo et al. 2016 ). Previously, public infrastructure was primarily physical and located in physical spaces, but today, much of it is virtual. To achieve the goal of inclusive urban construction, the government has issued numerous relevant laws and regulations regarding public infrastructure. For example, the Chinese legislature solicited opinions from the community on the ‘Barrier-free environmental construction law of the People’s Republic of China (Draft)’.

Virtual space, based on internet technology, is a major factor in the construction of smart cities. Virtual space can be described as an interactive world built primarily on the internet (Shibusawa, 2000 ), and it has underpinned the development of national public infrastructure. In 2015, China announced its first national pilot list of smart cities, and the government began the process of building smart cities (Liu et al. 2017 ). With the continuous updating and popularisation of technologies such as the internet of things and artificial intelligence (AI) (Gu and Iop, 2020 ), virtual space is becoming widely accessible to the public. For example, in the field of government affairs, public infrastructure is now regularly developed in virtual spaces, such as on e-government platforms.

The construction of smart cities is heavily influenced by technological infrastructure (Nicolas et al. 2020 ). Currently, smart cities are being developed, and the integration of physical and virtual spaces has entered a significant stage. For example, when customers go to an offline bank to transact business, they are often asked by bank employees to use online banking software on their mobile phones, join a queue, or prove their identities. Situations such as these are neither purely virtual nor entirely physical, but in fields like banking, both options need to be considered. Therefore, we propose a new concept of mixed physical and virtual spaces in which individuals can interact, share, collaborate, coordinate with each other, and act.

Currently, new public infrastructure has emerged in mixed physical and virtual spaces, such as ‘Zheli Office’ and Alipay in Zhejiang Province, China (as shown in Fig. 1 ). ‘Zheli Office’ is a comprehensive government application that integrates government services through digital technology, transferring some processes from offline to online and greatly improving the convenience, efficiency, and personalisation of government services. Thanks to its convenient payment facilities, Alipay continuously integrates various local services, such as daily-life payments and convenience services, and has gradually become Zhejiang’s largest living-services platform. Zhejiang residents can handle almost all government and life affairs using these two applications. ‘Zheli Office’ and Alipay are key examples of China’s new public infrastructure, which already leads the world in combining physical and virtual spaces; China therefore provided a valuable research context for this study.

figure 1

This figure shows the new public infrastructure has emerged in mixed physical and virtual spaces.

There is no doubt that the mixing of physical and virtual spaces is a helpful trend that makes life easier for most people. However, mixed physical and virtual spaces still have a threshold for their use, which makes it difficult for some groups to use the new public infrastructure effectively. Within society, there are people whose living conditions are restricted for physiological reasons. They may be elderly people, people with disabilities, or people who lack certain abilities. According to the results of China’s seventh (2021) national population census, there are 264.02 million elderly people aged 60 years and over in China, accounting for 18.7 per cent of the total population. China is expected to have a predominantly ageing population by around 2035. In addition, according to data released by the China Disabled Persons’ Federation, the total number of people with disabilities in China is more than 85 million, which is equivalent to one person with a disability for every 16 Chinese people. In this study, we downplay the differences between these groups, focusing only on common characteristics that hinder their use of the new public infrastructure. We collectively refer to these groups as digitally disadvantaged groups who may have difficulty adapting to the new public infrastructure integrating mixed physical and virtual spaces. This gap not only makes the new public infrastructure inconvenient for these digitally disadvantaged groups, but also leads to their exclusion and isolation from the advancing digital trend.

In the current context, in which the virtual and the real mix, digitally disadvantaged groups resemble stones in a turbulent flowing river. Although they can move forward, they do so with difficulty and will eventually be left behind. Besides facing the inherent inconveniences of new public infrastructure that integrates mixed physical and virtual spaces, digitally disadvantaged groups encounter additional obstacles. Unlike the traditional public infrastructure, the new public infrastructure requires users to log on to terminals, such as mobile phones, to engage with mixed physical and virtual spaces. However, a significant proportion of digitally disadvantaged groups cannot use the new public infrastructure effectively due to economic costs or a lack of familiarity with the technology. In addition, the use of facilities in physical and virtual mixed spaces requires engagement with numerous interactive elements, which further hinders digitally disadvantaged groups with weak social or technical skills.

The United Nations (UN) has designated ‘sustainable cities and communities’ as one of its Sustainable Development Goals, and the construction of smart cities can help achieve this goal (Blasi et al. 2022). Recent studies have pointed out that the spread of COVID-19 exacerbated the marginalisation of vulnerable groups, and that the lack of universal service processes and virtual facilities has created significant obstacles for digitally disadvantaged groups (Narzt et al. 2016; C. H. J. Wang et al. 2021). It should be noted that smart cities result from coordinated progress between technology and society (Al-Masri et al. 2019). The development of society should not come at the expense of certain people, and improving inclusiveness is key to the construction of smart cities, which should rest on people-oriented development (Ji et al. 2021). This paper focuses on the new public infrastructure that integrates mixed physical and virtual spaces. We aim to explore how improved inclusiveness for digitally disadvantaged groups can be achieved during the construction of smart cities, and we propose the following research questions:

RQ1. In a situation where there is a mix of physical and virtual spaces, what factors affect digitally disadvantaged groups’ use of the new public infrastructure?
RQ2. What requirements will enable digitally disadvantaged groups to participate fully in the new public infrastructure integrating mixed physical and virtual spaces?

To answer these questions, we built a research model based on the unified theory of acceptance and use of technology (UTAUT) to explore the construction of a new public infrastructure that integrates mixed physical and virtual spaces (Venkatesh et al. 2003 ). During the research process, we focused on the attitudes, willingness, and other behavioural characteristics of digitally disadvantaged groups in relation to mixed physical and virtual spaces, aiming to ultimately provide research support for the construction of highly inclusive smart cities. Compared to existing research, this study goes further in exploring the integration and interconnection of urban public infrastructure in the process of smart city construction. We conducted empirical research to delve more deeply into the factors that influence digitally disadvantaged groups’ use of the new public infrastructure integrating mixed physical and virtual spaces. The results of this study can provide valuable guidelines and a theoretical framework for the construction of new public infrastructure and the improvement of relevant systems in mixed physical and virtual spaces. We also considered the psychological characteristics of digitally disadvantaged groups, introduced psychological reactance into the model, and used social interaction anxiety as a moderator for the model, thereby further enriching the research results regarding mixed physical and virtual spaces. This study directs social and government attention towards the issues affecting digitally disadvantaged groups in the construction of inclusive smart cities, and it has practical implications for the future digitally inclusive development of cities in China and across the world.

Theoretical background and literature review

Theoretical background of UTAUT

Currently, the theories used to explore user acceptance behaviour are mainly applied separately in the online and offline fields. Theories relating to people’s offline use behaviour include the theory of planned behaviour (TPB) and the theory of reasoned action (TRA), while theories used to explore users’ online use behaviour include the technology acceptance model (TAM). Unlike previous researchers, who focused on either physical or virtual space, we focused on both. This required us to consider the characteristics of both spaces, drawing on these user acceptance theories (TPB, TRA, and TAM) together with the unified theory of acceptance and use of technology (UTAUT) proposed by Venkatesh et al. (2003). These theories have mainly been used to study the factors affecting user acceptance and the application of information technology. UTAUT integrates eight earlier user acceptance models, covering both online and offline usage scenarios, thereby meeting our need for a theoretical model that could encompass both physical and virtual spaces. UTAUT includes four key factors that directly affect users’ acceptance and usage behaviours: performance expectancy, facilitating conditions, social influence, and effort expectancy. Compared to other models, UTAUT has better interpretation and prediction capabilities for user acceptance behaviour (Venkatesh et al. 2003). A review of previous research showed that UTAUT has mainly been used to explore usage behaviours in online environments (Hoque and Sorwar, 2017) and technology acceptance (Heerink et al. 2010). Thus, UTAUT is effective for exploring acceptance and usage behaviours. We therefore based this study on the premise that UTAUT could be applied to people’s intentions to use the new public infrastructure that integrates mixed physical and virtual spaces.

In this paper, we refine and extend UTAUT based on the characteristics of digitally disadvantaged groups, and we propose a model to explore the willingness of digitally disadvantaged groups to use the new public infrastructure integrating mixed physical and virtual spaces. We categorised the possible influences on digitally disadvantaged groups’ use of the new public infrastructure into three areas: user factors, social factors, and technical factors. Among the user factors, we explored the willingness of digitally disadvantaged groups to use the new public infrastructure based on their performance expectancy and psychological reactance, performance expectancy being one of the UTAUT variables. To account for situations in which some users resist new technologies due to cognitive bias, we combined evidence that resistance among elderly people is a key factor affecting their adoption of mobile medical services (Hoque and Sorwar, 2017) with the theory of psychological reactance, and we introduced psychological reactance as an independent variable (Miron and Brehm, 2006). Among the social factors, we expanded the UTAUT social influence variable to include perceived institutional support and perceived marketplace influence. The new public infrastructure cannot be separated from the relevant government policies or from the economic development status of the society in which it is constructed. Therefore, we aimed to explore the willingness of digitally disadvantaged people to use the new public infrastructure in terms of perceived institutional support and perceived marketplace influence. Among the technical factors, we explored the intentions of digitally disadvantaged groups to use the new public infrastructure based on effort expectancy and facilitating conditions, both variables taken from UTAUT. In addition, considering that users with different levels of social interaction anxiety may differ in their intentions to use the new public infrastructure, we drew on research regarding the moderating role of consumer technological anxiety in the adoption of mobile shopping and introduced social interaction anxiety as a moderating variable (Yang and Forney, 2013). Believing that these modifications would further improve the interpretive ability of UTAUT, we considered the resulting model helpful for studying the intentions of digitally disadvantaged groups to use the new public infrastructure.

Intentions to use mixed physical and virtual spaces

Many scholars have researched the factors that affect users’ willingness to use intelligent facilities, which can be broadly divided into two categories: for-profit and public welfare facilities. In the traditional business field, modern information technologies, such as the internet of things and AI, have become important means by which businesses can reduce costs and expand production. Even in traditional industries, such as agriculture (Kadylak and Cotten, 2020 ) and aquaculture (Cai et al. 2023 ), virtual technology now plays a significant role. Operators hope to use advanced technology to change traditional production and marketing models and to keep pace with new developments. However, mixed physical and virtual spaces should be inclusive for all people. Already, technological development is making it clear that no one will be able to entirely avoid mixed physical and virtual spaces. The virtualisation of public welfare facilities has gradually emerged in many areas of daily life, such as electronic health (D. D. Lee et al. 2019 ) and telemedicine (Werner and Karnieli, 2003 ). Government affairs are increasingly managed jointly in both physical and virtual spaces, resulting in an increase in e-government research (Ahn and Chen, 2022 ).

A review of the literature over the past decade showed that users’ willingness to use both for-profit and public welfare facilities is influenced by three sets of factors: user factors, social factors, and technical factors. First, regarding user factors, Bélanger and Carter (2008) pointed out that consumer trust in the government and in technology are key factors affecting people’s intentions to use technology. Research on older people has shown that self-perceived ageing can have a significant impact on emotional attachment and willingness to use technology (B. A. Wang et al. 2021). Second, regarding social factors, consumers’ intentions to use may vary significantly across different market contexts (Chiu and Hofer, 2015). For example, research has shown that people’s willingness to use digital healthcare tools is influenced by the attitudes of the healthcare professionals they encounter (Thapa et al. 2021). Third, technical factors include appropriate technical designs that help consumers use facilities more easily. Yadav et al. (2019) considered technical factors such as ease of use, quality of service provided, and efficiency parameters in their experiments.

The rapid development of virtual technology has inevitably drawn attention away from the physical world, and most previous researchers have focused on either virtual or physical spaces. However, scholars have noted the increasing mixing of these two spaces and have begun to study the relationships between them (Aslesen et al. 2019; Cocciolo, 2010). Wang (2007) proposed enhancing virtual environments by inserting real entities. Existing research has shown that physical and virtual spaces have begun to permeate each other in both the economic and public spheres, blurring the boundaries between them (K. F. Chen et al. 2024; Paköz et al. 2022). Jakonen (2024) pointed out that, as digital technologies are integrated into city building, the role of urban space in various stakeholders’ lives needs to be fully considered. The intermingling of physical and virtual spaces had already entered people’s daily work (J. Chen et al. 2024), and the COVID-19 pandemic reinforced this integration trend (Yeung and Hao, 2024). The intermingling of virtual and physical spaces is a sign of social progress, but it poses a considerable challenge for digitally disadvantaged people. For example, people with disabilities experience infrastructure, access, regulatory, communication, and legislative barriers when using telehealth services (Annaswamy et al. 2020). Overall, however, few studies have considered the mixing of virtual and physical spaces.

People who are familiar with information technology, especially Generation Z, generally consider the integration of physical and virtual spaces convenient. However, for digitally disadvantaged groups, such ‘science fiction’-type changes can be disorientating and may undermine their quality of life. The elderly are an important group among the digitally disadvantaged groups referred to in this paper, and they have been the primary target of previous research on issues of inclusivity. Many researchers have considered the factors influencing older people’s willingness to use emerging technologies. For example, for the elderly, ease of use is often a prerequisite for enjoyment (Dogruel et al. 2015). Iancu and Iancu (2020) explored the interaction of elderly people with technology, with a particular focus on mobile device design, and emphasised that elderly people’s difficulties with technology stem from usability issues that can be addressed through improved design and appropriate training. Moreover, people with disabilities are an important group among digitally disadvantaged groups and an essential concern for the inclusive construction of cities. The rapid development of emerging technologies offers convenience to people with disabilities and has spawned many physical accessibility facilities and electronic accessibility systems (Botelho, 2021; Perez et al. 2023). Ease of use, convenience, and affordability are also key elements in enabling disadvantaged groups to use these facilities (Mogaji et al. 2023; Mogaji and Nguyen, 2021). Zander et al. (2023) explored the facilitators of and barriers to the implementation of welfare technologies for elderly people and people with disabilities, concluding that factors such as abilities, attitudes, values, and lifestyles must be considered when planning such implementations.

In summary, scholars have conducted extensive research on the factors influencing intentions to use virtual facilities. These studies have revealed the underlying logic behind people’s adoption of virtual technology and have laid the foundations for the construction of inclusive new public infrastructure. Moreover, scholars have proposed solutions to the problems experienced by digitally disadvantaged groups in adapting to virtual facilities, but most of these scholars have focused on the elderly. Furthermore, scholars have recently conducted preliminary explorations of the mixing of physical and virtual spaces. These studies provided insights for this study, enabling us to identify both relevant background factors and current developments in the integration of virtual spaces with reality. However, most researchers have viewed the development of technology from the perspective of either virtual space or physical space, and they have rarely explored technology from the perspective of mixed physical and virtual spaces. In addition, when focusing on designs for the inclusion of digitally disadvantaged groups, scholars have mainly provided suggestions for specific practices, such as improvements in technology, hardware facilities, or device interaction interfaces, while little consideration has been given to the psychological characteristics of digitally disadvantaged groups or to the overall impact of society on these groups. Finally, in studying inclusive modernisation, researchers have generally focused on the elderly or people with disabilities, with less exploration of behavioural differences caused by factors such as social anxiety. Therefore, based on UTAUT, we explored the willingness of digitally disadvantaged groups to use the new public infrastructure integrating mixed physical and virtual spaces in a Chinese context (as shown in Fig. 2 ).

Figure 2. Research model for digitally disadvantaged groups’ willingness to use the new public infrastructure integrating mixed physical and virtual spaces in the Chinese context.

Research hypotheses

User factors

Performance expectancy is defined as the degree to which an individual believes that using a system will help him or her achieve gains in job performance (Chao, 2019 ; Venkatesh et al. 2003 ). In this paper, performance expectancy refers to the extent to which digitally disadvantaged groups obtain tangible results from the use of the new public infrastructure. Since individuals have a strong desire to improve their work performance, they have strong intentions to use systems that can improve that performance. Previous studies in various fields have confirmed the view that high performance expectancy can effectively promote individuals’ sustained intentions to use technology (Abbad, 2021 ; Chou et al. 2010 ; S. W. Lee et al. 2019 ). For example, the role of performance expectancy was verified in a study on intentions to use e-government (Zeebaree et al. 2022 ). We believe that if digitally disadvantaged groups have confidence that the new public infrastructure will help them improve their lives or work performance, even in complex environments, such as mixed physical and virtual spaces, they will have a greater willingness to use it. Therefore, we developed the following hypothesis:

H1: Performance expectancy has a positive impact on digitally disadvantaged groups’ intentions to use the new public infrastructure integrating mixed physical and virtual spaces.

Brehm (1966) proposed psychological reactance theory, according to which, when individuals perceive that their freedom to make their own choices is under threat, a motivational state to restore that freedom is awakened (Miron and Brehm, 2006). Psychological reactance manifests in an individual’s intentional or unintentional resistance to external factors. Previous studies have shown that when individuals are using systems or receiving information, cognitive biases may lead them to interpret the external environment erroneously, resulting in psychological reactance (Roubroeks et al. 2010). Surprisingly, cognitive biases may prompt individuals to experience psychological reactance even when they are offered support with helpful intentions (Tian et al. 2020). In this paper, we define psychological reactance as the cognitive-level or psychological-level obstacles or resistance of digitally disadvantaged groups to the new public infrastructure. This resistance may be due to digitally disadvantaged groups misunderstanding the purpose or use of the new public infrastructure; for example, they may think that it will harm their self-respect or personal interests. When digitally disadvantaged groups view the new public infrastructure as a threat to their status or to their freedom to make their own decisions, they may develop resistance to its use. Therefore, psychological reactance cannot be ignored as an important factor potentially affecting digitally disadvantaged groups’ intentions to use the new public infrastructure. Hence, we developed the following hypothesis:

H2: Psychological reactance has a negative impact on digitally disadvantaged groups’ intentions to use the new public infrastructure integrating mixed physical and virtual spaces.

Social factors

In many countries, the main providers of public infrastructure are government and public institutions (Susilawati et al. 2010). Government decision-making is generally based on laws or government regulations (Acharya et al. 2022), and government decision-making procedures affect not only the builders of infrastructure but also the intentions of users. In daily life, individuals and social organisations tend to abide by and maintain social norms to ensure that their behaviours are socially attractive and acceptable (Bygrave and Minniti, 2000; Martins et al. 2019). For example, national financial policies influence the marketing effectiveness of enterprises (Chen et al. 2021). Therefore, we believe that perceived institutional support is a key element influencing the intentions of digitally disadvantaged groups to use the new public infrastructure. In this paper, perceived institutional support refers to digitally disadvantaged groups’ perception of state or government policy support for using the new public infrastructure, including institutional norms, laws, and regulations. Existing institutions have mainly been designed around public infrastructure that exists in physical space. We hope to explore whether perceived institutional support affects digitally disadvantaged groups’ intentions to use the new public infrastructure that integrates mixed physical and virtual spaces. Thus, we formulated the following hypothesis:

H3: Perceived institutional support has a positive impact on digitally disadvantaged groups’ intentions to use the new public infrastructure integrating mixed physical and virtual spaces.

Perceived marketplace influence is defined as actions or decisions that affect the market behaviour of consumers and organisations (Joshi et al. 2021; Leary et al. 2014). In this paper, perceived marketplace influence is defined as the effect that other people’s use of the new public infrastructure has on the intentions of digitally disadvantaged groups to use it. Perceived marketplace influence increases consumers’ perceptions of market dynamics and their sense of control through the influence of other participants in the marketplace (Leary et al. 2019). Scholars have explored the impact of perceived marketplace influence on consumers’ purchase and use intentions in relation to fair trade and charity (Leary et al. 2019; Schneider and Leonard, 2022). Schneider and Leonard (2022) claimed that if consumers believe that their mask-wearing behaviour will motivate others around them to follow suit, this belief will in turn motivate them to wear masks. Similarly, when digitally disadvantaged people see the people around them using the new public infrastructure, this creates an invisible market influence that affects their ability and motivation to try the infrastructure themselves. Therefore, we developed the following hypothesis:

H4: Perceived marketplace influence has a positive impact on digitally disadvantaged groups’ intentions to use the new public infrastructure integrating mixed physical and virtual spaces.

Technical factors

Venkatesh et al. ( 2003 ) defined effort expectancy as the ease with which individuals can use a system. According to Tam et al. ( 2020 ), effort expectancy positively affects individuals’ performance expectancy and their sustained intentions to use mobile applications. In this paper, effort expectancy refers to the ease of use of the new public infrastructure for digitally disadvantaged groups: the higher the level of innovation and the more steps involved in using a facility, the poorer the user experience and the lower the utilisation rate (Venkatesh and Brown, 2001 ). A study on the use of AI devices for service delivery noted that the higher the level of anthropomorphism, the higher the cost of effort required by the customer to use a humanoid AI device (Gursoy et al. 2019 ). In mixed physical and virtual spaces, the design and use of new public infrastructure may become increasingly complex, negatively affecting the lives of digitally disadvantaged groups. We believe that the simpler the new public infrastructure, the more it will attract digitally disadvantaged groups to use it, while also enhancing their intentions to use it. Therefore, we formulated the following hypothesis:

H5: Effort expectancy has a positive impact on digitally disadvantaged groups’ intentions to use the new public infrastructure integrating mixed physical and virtual spaces.

Venkatesh et al. (2003) defined facilitating conditions as the degree to which an individual believes that an organisation and its technical infrastructure exist to support the use of a system. In this paper, facilitating conditions refer to the external conditions that support digitally disadvantaged groups in using the new public infrastructure, including resources, knowledge bases, and skills. According to Zhong et al. (2021), facilitating conditions can affect users’ attitudes towards the use of face recognition payment systems and, in turn, their intentions to use them. Moreover, scholars have shown that facilitating conditions significantly promote people’s intentions to use e-learning systems and e-government (Abbad, 2021; Purohit et al. 2022). Currently, the new public infrastructure involves mixed physical and virtual spaces, and external facilitating conditions, such as a ‘knowledge salon’ or a training session, can significantly promote digitally disadvantaged groups’ intentions and willingness to use the infrastructure. Therefore, we developed the following hypothesis:

H6: Facilitating conditions have a positive impact on digitally disadvantaged groups’ intentions to use the new public infrastructure integrating mixed physical and virtual spaces.

Moderator variable

Magee et al. ( 1996 ) claimed that social interaction anxiety is an uncomfortable emotion that some people experience in social situations, leading to avoidance, a desire for solitude, and a fear of criticism. In this paper, social interaction anxiety refers to the worries and fears of digitally disadvantaged groups about the social interactions they will be exposed to when using the new public infrastructure. Research has confirmed that people with high levels of dissatisfaction with their own bodies are more anxious in social situations (Li Mo and Bai, 2023 ). Moreover, people with high degrees of social interaction anxiety may feel uncomfortable in front of strangers or when observed by others (Zhu and Deng, 2021 ). Digitally disadvantaged groups usually have some physiological inadequacies and may be rejected by ‘normal’ groups. Previous studies have shown that the pain caused by social exclusion is positively correlated with anxiety (Davidson et al. 2019 ). Digitally disadvantaged groups may have higher degrees of dissatisfaction with their own physical abilities, which may exacerbate any social interaction anxiety they already have. We believe that high social interaction anxiety is a common characteristic of digitally disadvantaged groups, defining them as ‘different’ from other groups.

In mixed physical and virtual spaces, if the design of the new public infrastructure is not friendly and does not help digitally disadvantaged groups use it easily, their perceived social exclusion is likely to increase, resulting in a heightened sense of anxiety. Compared with face-to-face, offline social communication, online platforms offer convenience in terms of both communication method and duration (Ali et al. 2020), so people with a high degree of social interaction anxiety frequently prefer and are likely to choose online social communication (Hutchins et al. 2021). However, digitally disadvantaged groups may be unable to avoid social interaction by using the facilities offered in virtual spaces. We therefore believe that the influencing factors may have different effects on intentions to use the new public infrastructure, depending on the level of social interaction anxiety experienced. Accordingly, we predicted the following:

H7: Social interaction anxiety has a moderating effect on each path.

Research methodology

Research background and cases

To better demonstrate the phenomenon of the new public infrastructure integrating mixed physical and virtual spaces, we considered the cases of ‘Zheli Office’ (as shown in Fig. 3 ) and Alipay (as shown in Fig. 4 ) to explain the two areas of government affairs and daily life affairs, which greatly affect the daily lives of residents. Examining the functions of ‘Zheli Office’ and Alipay in mixed physical and virtual spaces allowed us to provide examples of the new public infrastructure integrating mixed physical and virtual spaces.

Figure 3. ‘Zheli Office’, a comprehensive government application that integrates government services through digital technology, transferring some processes from offline to online and greatly improving the convenience, efficiency, and personalisation of government services.

Figure 4. Alipay, which supports the integration of various local services, such as utility payments and other convenience services, and has gradually become Zhejiang’s largest living-service platform.

‘Zheli Office’ provides Zhejiang residents with a channel to handle their tax affairs. Residents who need to manage their tax affairs can choose the corresponding tax department through ‘Zheli Office’ and schedule the date and time for offline processing. Residents can also upload tax-related materials directly to ‘Zheli Office’ to submit them to the tax department for preapproval. Residents only need to present the vouchers generated by ‘Zheli Office’ to the tax department at the scheduled time to manage tax affairs and undergo final review. By mitigating long waiting times and tedious tax material review steps through the transfer of processes from physical spaces to virtual spaces, ‘Zheli Office’ greatly optimises the tax declaration process and saves residents time and effort in tax declaration.

Alipay provides residents with a channel to rent shared bicycles. Residents who want to rent bicycles can enter their personal information on Alipay in advance and provide a guarantee (an Alipay credit score or deposit payment). When renting a shared bicycle offline, residents only need to scan the QR code on the bike through Alipay to unlock and use it. When returning the bike, residents can also click the return button to automatically lock the bike and pay the fee anytime and anywhere. By automating leasing procedures and fee settlement in virtual spaces, Alipay avoids the tedious operations that residents experience when renting bicycles in physical stores.

The preceding two examples demonstrate the concrete form that the integration of virtual and physical spaces takes. Residents’ government and life affairs, such as tax declarations, certificate processing, transportation, and shopping, all require public infrastructure support. With the emergence of new digital trends in residents’ daily lives, mixed physical and virtual spaces have produced a public infrastructure that can support residents’ daily activities within them. Because public infrastructure involving mixed physical and virtual spaces differs essentially from traditional physical and virtual public infrastructures, we propose a new concept, the new public infrastructure, defined as ‘a public infrastructure that supports residents in conducting daily activities in mixed physical and virtual spaces’. It is worth noting that the new public infrastructure may encompass not only the virtual spaces provided by digital applications but also the physical spaces provided by machines capable of receiving digital messages, such as smart screens and scanners.

The UN Sustainable Development Goal Report highlights that human society needs to build sustainable cities and communities that do not sacrifice the equality of some people. Digitally disadvantaged groups should not be excluded from the sustainable development of cities due to the increasing digitalisation trend because everyone should enjoy the convenience of the new public infrastructure provided by cities. Hence, ensuring that digitally disadvantaged groups can easily and comfortably use the new public infrastructure will help promote the construction of smart cities, making them more inclusive and universal. It will also promote the development of smart cities in a more equal and sustainable direction, ensuring that everyone can enjoy the benefits of urban development. Therefore, in this article, we emphasise the importance of digitally disadvantaged groups in the construction of sustainable smart cities. Through their participation and feedback, we can build more inclusive and sustainable smart cities in the future.

Research design

The aim of this paper was to explore the specific factors that influence the intentions of digitally disadvantaged groups to use the new public infrastructure integrating mixed physical and virtual spaces, and to provide a rational explanation for the role of each factor. To achieve this goal, we first reviewed numerous relevant academic papers. This formed the basis of our research assumptions and helped determine the measurement items we included. Second, we collected data through a questionnaire survey and then analysed the data using partial least squares structural equation modelling (PLS-SEM) to explore the influence of the different factors on digitally disadvantaged groups’ intentions to use the new public infrastructure. Finally, we considered in depth the mechanisms by which the various factors influenced digitally disadvantaged groups’ intentions to use mixed physical and virtual spaces.

We distributed a structured questionnaire to collect data for the study. To ensure the reliability and validity of the questionnaire, we based the item development on scales used in previous studies (as shown in Appendix A). The first part of the questionnaire concerned the participants’ intentions to use the new public infrastructure. Responses to this part were given on a seven-point Likert scale measuring agreement with various statements, with 1 indicating ‘strong disagreement’ and 7 indicating ‘strong agreement’. In addition, we designed cumulative scoring questions to measure the participants’ social interaction anxiety, based on Fergus’s Social Interaction Anxiety Scale (Fergus et al. 2012). The second part of the questionnaire concerned the demographic characteristics of the participants, including but not limited to gender, age, and education level. Participants were informed that completing the survey was voluntary, that they had the right to refuse or withdraw at any time, and that the researchers would not collect any personal information that could identify them. Only after obtaining the participants’ consent did we commence the questionnaire survey and data collection. Because the concept of the new public infrastructure referred to in this study is quite abstract and might be difficult for digitally disadvantaged groups to grasp, we simplified it to ‘an accessible infrastructure’ and informed respondents about typical cases and the relevant context of this study before they began to complete the questionnaire.
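
To make the cumulative scoring concrete, the following minimal sketch shows how a social interaction anxiety score could be computed from six seven-point Likert items (the six-item count is described in the multigroup analysis below). The column names `sia1`–`sia6` are hypothetical placeholders; the actual item wording and coding are those of Appendix A.

```python
import pandas as pd

# Hypothetical column names for the six social interaction anxiety items.
SIA_ITEMS = ["sia1", "sia2", "sia3", "sia4", "sia5", "sia6"]

def sia_score(responses: pd.DataFrame) -> pd.Series:
    """Cumulative social interaction anxiety score.

    Each item is a seven-point Likert response coded 1 ('strong
    disagreement') to 7 ('strong agreement'), so the cumulative
    score ranges from 6 to 42.
    """
    return responses[SIA_ITEMS].sum(axis=1)

# A respondent answering 5 on every item scores 30.
demo = pd.DataFrame([{item: 5 for item in SIA_ITEMS}])
print(sia_score(demo).iloc[0])  # 30
```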

Once the questionnaire design was finalised, we conducted a pretest to ensure that the questions met the basic requirements of reliability and validity and that the participants could accurately understand them. In the formal survey stage, we distributed the online questionnaire to digitally disadvantaged groups based on the principle of simple random sampling and collected data through the Questionnaire Star platform. Our sampling criteria were as follows: first, the respondents had to belong to digitally disadvantaged groups and have experienced digital divide problems; second, they had to own at least one smart device and have access to the new public infrastructure, such as via ‘Zheli Office’ or Alipay; and third, they had to have used government or daily life services on ‘Zheli Office’ or Alipay at least once in the past three months. After eliminating invalid questionnaires, 337 valid completed questionnaires remained. The demographic characteristics of the participants are shown in Table 1. In terms of gender, 54.30% of the participants were male and 45.70% were female. In terms of age, 64.09% of the participants were aged 18–45 years. In terms of social interaction anxiety, 46.59% of the participants had low social interaction anxiety and 53.41% had high social interaction anxiety.

Data analysis

PLS-SEM imposes few restrictions on the measurement scale, sample size, and residual distribution (Ringle et al. 2012). Moreover, because the environment in which the research object was located was relatively new, we added two special variables, psychological reactance and perceived institutional support, to the model, and PLS-SEM is well suited to exploratory research on newly constructed theories and research frameworks. Following established practice, the data analysis was divided into two stages: (1) the measurement model was used to evaluate the reliability and validity of the constructs, and (2) the structural model was used to test the study hypotheses by examining the relationships between the variables.
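
To illustrate the two-stage logic only (not the actual PLS-SEM estimation, which uses iteratively estimated indicator weights in dedicated software), the sketch below approximates the structural stage with equally weighted composite scores and an ordinary least-squares regression, and bootstraps the resulting path coefficients. The construct-to-item mapping is hypothetical.

```python
import numpy as np
import pandas as pd

# Hypothetical construct-to-item mapping; a real PLS-SEM run estimates
# indicator weights iteratively, whereas this sketch assumes equal weights.
BLOCKS = {
    "PE": ["pe1", "pe2", "pe3"],      # performance expectancy
    "PR": ["pr1", "pr2", "pr3"],      # psychological reactance
    "PIS": ["pis1", "pis2", "pis3"],  # perceived institutional support
    "PMI": ["pmi1", "pmi2", "pmi3"],  # perceived marketplace influence
    "EE": ["ee1", "ee2", "ee3"],      # effort expectancy
    "FC": ["fc1", "fc2", "fc3"],      # facilitating conditions
    "IU": ["iu1", "iu2", "iu3"],      # intention to use
}

def composites(items: pd.DataFrame) -> pd.DataFrame:
    # Stage 1 stand-in: equally weighted, standardised composite scores.
    scores = pd.DataFrame({c: items[cols].mean(axis=1)
                           for c, cols in BLOCKS.items()})
    return (scores - scores.mean()) / scores.std(ddof=0)

def path_coefficients(items: pd.DataFrame) -> pd.Series:
    # Stage 2 stand-in: regress intention to use on the six factors.
    z = composites(items)
    X = z[["PE", "PR", "PIS", "PMI", "EE", "FC"]].to_numpy()
    beta, *_ = np.linalg.lstsq(X, z["IU"].to_numpy(), rcond=None)
    return pd.Series(beta, index=["PE", "PR", "PIS", "PMI", "EE", "FC"])

def bootstrap_se(items: pd.DataFrame, n_boot: int = 1000) -> pd.Series:
    # Resample respondents with replacement to approximate standard errors.
    draws = [path_coefficients(items.sample(frac=1.0, replace=True,
                                            random_state=i))
             for i in range(n_boot)]
    return pd.concat(draws, axis=1).std(axis=1)
```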

Measurement model

First, we tested the reliability of the model by evaluating the reliability of the constructs. As shown in Table 2, the Cronbach’s alpha (CA) values for this study ranged from 0.858 to 0.901, well above the commonly accepted threshold of 0.7 (Jöreskog, 1971). The composite reliability (CR) scores ranged from 0.904 to 0.931, likewise above the 0.7 threshold (Bagozzi and Phillips, 1982) (see Table 2).
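
For reference, the two reliability statistics reported in Table 2 can be computed with the standard formulas below; this is a minimal sketch assuming item-level data and standardised loadings are available, not a reproduction of the authors’ software output.

```python
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """CA = k/(k-1) * (1 - sum of item variances / variance of item total)."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

def composite_reliability(loadings: np.ndarray) -> float:
    """CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances);
    with standardised loadings, each error variance is 1 - loading^2."""
    s = loadings.sum()
    return float(s**2 / (s**2 + (1 - loadings**2).sum()))

# Example: three indicators with standardised loadings 0.85, 0.88, 0.82.
print(round(composite_reliability(np.array([0.85, 0.88, 0.82])), 3))  # 0.887
```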

We then assessed validity. The test for construct validity included convergent validity and discriminant validity. Convergent validity was verified mainly by the average variance extracted (AVE), for which the recommended minimum value is 0.5 (Kim and Park, 2013). In this study, the AVE values for all constructs far exceeded this value (the minimum AVE was 0.702; see Table 2), indicating that the constructs in this model were reliable. The Fornell–Larcker criterion is commonly used to evaluate discriminant validity: the square root of each construct’s AVE should be larger than its correlations with the other constructs, meaning that each construct best explains the variance of its own indicators (Hair et al. 2014), as shown in Table 3. The validity of the measurement model was further evaluated by calculating the cross-loadings of the reflective constructs. As Table 4 shows, compared with the other constructs in the structural model, the indicators of the reflective measurement model loaded most highly on their own latent constructs (Hair et al. 2022), indicating that all results met the cross-loading evaluation criterion.
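
These convergent and discriminant validity checks reduce to simple computations on the standardised loadings and construct scores. A minimal sketch follows, assuming the construct scores and AVE values come from the fitted measurement model.

```python
import numpy as np
import pandas as pd

def ave(loadings: np.ndarray) -> float:
    """Average variance extracted: the mean of the squared standardised loadings."""
    return float((loadings**2).mean())

def fornell_larcker(construct_scores: pd.DataFrame,
                    aves: dict) -> pd.DataFrame:
    """Matrix with sqrt(AVE) on the diagonal and inter-construct correlations
    off the diagonal; for discriminant validity, each diagonal entry should
    exceed every other entry in its row and column."""
    matrix = construct_scores.corr()
    for construct, value in aves.items():
        matrix.loc[construct, construct] = np.sqrt(value)
    return matrix
```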

In addition, we used the heterotrait-monotrait (HTMT) ratio of correlations to analyse discriminant validity (Henseler et al. 2015 ). Generally, an HTMT value greater than 0.85 indicates that there are potential discriminant validity risks (Hair et al. 2022 ), but Table 5 shows that the HTMT ratios of the correlations in this study were all lower than this value (the maximum value was 0.844).
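
For completeness, the HTMT ratio of Henseler et al. (2015) for constructs $i$ and $j$, with $K_i$ and $K_j$ indicators and inter-item correlations $r$, is the mean heterotrait-heteromethod correlation divided by the geometric mean of the two mean monotrait-heteromethod correlations:

```latex
\mathrm{HTMT}_{ij} =
  \frac{\dfrac{1}{K_i K_j}\displaystyle\sum_{g=1}^{K_i}\sum_{h=1}^{K_j} r_{ig,jh}}
       {\left(\dfrac{2}{K_i(K_i-1)}\displaystyle\sum_{g=1}^{K_i-1}\sum_{h=g+1}^{K_i} r_{ig,ih}
        \;\cdot\;
        \dfrac{2}{K_j(K_j-1)}\displaystyle\sum_{g=1}^{K_j-1}\sum_{h=g+1}^{K_j} r_{jg,jh}\right)^{1/2}}
```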

Structural model

Figure 5 presents the evaluation results for the structural model for the whole sample. The R² value for the structural model was 0.740; that is, the model explained 74.00% of the variance in intention to use. The first step was to ensure that there was no significant collinearity between the predictor constructs, as collinearity would introduce redundancy into the analysis (Hair et al. 2019). All variance inflation factor (VIF) values in this study were between 1.743 and 2.869 and were therefore below the 3.3 threshold for the collinearity test (Hair et al. 2022), indicating that the path coefficients were not distorted by collinearity. This also suggests that the model had a low probability of common method bias.
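
The collinearity screening follows the standard VIF definition, VIF_j = 1/(1 − R_j²), where R_j² comes from regressing each predictor on the remaining predictors. A minimal sketch, assuming the predictor construct scores are available in a data frame, is:

```python
import numpy as np
import pandas as pd

def vif(predictors: pd.DataFrame) -> pd.Series:
    """VIF_j = 1 / (1 - R_j^2), where R_j^2 is from regressing predictor j
    on all of the remaining predictors (with an intercept)."""
    out = {}
    for col in predictors.columns:
        y = predictors[col].to_numpy()
        others = predictors.drop(columns=col).to_numpy()
        X = np.column_stack([np.ones(len(others)), others])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        residuals = y - X @ beta
        r_squared = 1 - residuals.var() / y.var()
        out[col] = 1.0 / (1.0 - r_squared)
    return pd.Series(out)  # values above the 3.3 threshold would flag collinearity
```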

Figure 5. Evaluation results for the structural model.

As shown in Fig. 5, performance expectancy (β = 0.505, p < 0.001), perceived institutional support (β = 0.338, p < 0.001), perceived marketplace influence (β = 0.190, p < 0.001), effort expectancy (β = 0.176, p < 0.001), and facilitating conditions (β = 0.108, p < 0.001) all had significant positive effects on intention to use. Moreover, the relationship between psychological reactance (β = −0.271, p < 0.001) and intention to use was negative and significant. Therefore, all the hypothesised paths, except those involving the moderator variable, were verified.

Multi-group analysis

To study the moderating effect between the independent variables and the dependent variable, Henseler et al. (2009) recommended using multigroup analysis (MGA). In this study, we used MGA to analyse the moderating effect of different levels of social interaction anxiety. We designed six items for social interaction anxiety (as shown in Appendix A). Based on the cumulative score across these six items, questionnaires with scores of 6–20 indicated low social interaction anxiety, while questionnaires with scores of 28–42 indicated high social interaction anxiety. Questionnaires with scores of 21–27 were considered neutral and were excluded from the analysis involving social interaction anxiety. Based on multigroup confirmatory factor analysis, we established configural invariance, compositional invariance, and the equality of composite means and variances (Hair et al. 2019). As shown in Formula 1, we used an independent-samples t-test as the significance test, with a p-value below 0.05 indicating significance.
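
Formula 1 is not reproduced in this excerpt. For orientation, a widely used parametric form of the independent-samples t-test for comparing a path coefficient across two groups of sizes $n_1$ and $n_2$, with bootstrap standard errors $se_1$ and $se_2$, is given below; the authors’ exact formula may differ.

```latex
t = \frac{\beta_1 - \beta_2}
         {\sqrt{\dfrac{(n_1-1)^2\,se_1^2 + (n_2-1)^2\,se_2^2}{n_1+n_2-2}}
          \cdot \sqrt{\dfrac{1}{n_1} + \dfrac{1}{n_2}}},
\qquad df = n_1 + n_2 - 2
```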

As shown in Table 6, under the social factors, the p-value for perceived institutional support in relation to intention to use was 0.335, which failed the significance test, showing no differences between the different degrees of social interaction anxiety. For the technical factors, the p-value for facilitating conditions in relation to intention to use was 0.054, which again failed the test, showing no differences between the different levels of social interaction anxiety. However, the p-values for performance expectancy, psychological reactance, perceived marketplace influence, and effort expectancy in relation to intention to use were all less than 0.05 and therefore passed the significance test. This revealed that the degree of social interaction anxiety had significant effects on these factors and that social interaction anxiety moderated some of the paths from the independent variables.

Next, we considered the path coefficients and p-values for the high and low social anxiety groups, as shown in Table 6. First, performance expectancy had significantly different effects on intention to use at different levels of social anxiety: the path failed the test in the low-social-anxiety group (β = −0.129, p = 0.394) and passed it in the high-social-anxiety group (β = 0.202, p = 0.004), showing that performance expectancy influenced intention to use more strongly among participants with high social anxiety. Second, psychological reactance showed significantly different effects on intention to use at different degrees of social anxiety, failing the test in the low-social-anxiety group (β = 0.184, p = 0.065) and passing it in the high-social-anxiety group (β = −0.466, p = 0.000). Third, perceived marketplace influence had significantly different effects on intention to use: it had a significant effect in the low-social-anxiety group (β = 0.312, p = 0.001) but not in the high-social-anxiety group (β = 0.085, p = 0.189). Finally, effort expectancy had significantly different effects on intention to use: it was insignificant in the low-social-anxiety group (β = −0.058, p = 0.488) but significant in the high-social-anxiety group (β = 0.326, p = 0.000). Therefore, different degrees of social interaction anxiety had significantly different effects on the paths from performance expectancy, psychological reactance, perceived marketplace influence, and effort expectancy.

Compared with previous studies, this study constituted a preliminary but groundbreaking exploration of mixed physical and virtual spaces, focusing on the inclusivity problems encountered by digitally disadvantaged groups in these spaces. We examined six factors, namely performance expectancy, psychological reactance, perceived institutional support, perceived marketplace influence, effort expectancy, and facilitating conditions, with intention to use serving as the measure of the perceived value of the new public infrastructure. Because digitally disadvantaged groups, owing to their own characteristics or social influences, may respond differently from the general population in social interactions, we added social interaction anxiety to the model as a moderating variable, in line with the assumed psychological characteristics of these groups. The empirical results revealed strong correlations between the influencing factors and intention to use, showing that the model has good applicability to mixed physical and virtual spaces.

According to the empirical results, performance expectancy has a significant positive impact on intention to use. Although the mixing of the virtual and the real creates usage issues and cognitive difficulties for digitally disadvantaged groups, a new public infrastructure that capitalises on the advantages of blended virtual and physical spaces can help users build confidence in its use and thereby improve their intentions to use it. Furthermore, the promoting effect of performance expectancy on intention to use is stronger among users with high social interaction anxiety. In most cases, social interaction anxiety stems from self-generated avoidance, isolation, and fear of criticism (Schultz and Heimberg, 2008), which may make highly anxious digitally disadvantaged groups reluctant to engage with others when using public facilities (Mulvale et al. 2019; Schou and Pors, 2019). However, the new public infrastructure is often unattended, which can be an advantage for users with high social anxiety. Therefore, the effect of performance expectancy in promoting intention to use is more significant in this group.

We also found that the psychological reactance of digitally disadvantaged groups had a negative impact on their intentions to use technology in mixed physical and virtual spaces. Social interaction anxiety moderated this relationship, such that the negative effect of psychological reactance on intention to use the new public infrastructure was more pronounced in the group with high social interaction anxiety. Facilities involving social or interactive elements may make users with high social interaction anxiety feel that their autonomy is, to some extent, being violated, thus triggering subconscious resistance. The communication anxiety of digitally disadvantaged groups stems not only from the new public infrastructure itself but also from the environment in which it is used (Fang et al. 2019). Complex, mixed physical and virtual spaces can disrupt the habits that digitally disadvantaged groups have developed in purely physical spaces, resulting in greater anxiety (Hu et al. 2022), and groups with high levels of social anxiety prefer to remain independent. Therefore, a high degree of social interaction anxiety will strengthen psychological reactance towards using the new public infrastructure.

The results of this paper shed further light on the role of social factors. In particular, the relationship between perceived institutional support and intention to use shows that perceived institutional support promotes digitally disadvantaged groups’ intentions to use the new public infrastructure. This indicates that promotion measures need to be introduced by the government and public institutions if digitally disadvantaged groups are to accept the new public infrastructure. The development of a new public infrastructure integrating mixed physical and virtual spaces requires a high level of involvement from government institutions to facilitate the inclusive development of sustainable smart cities (Khan et al. 2020). An interesting finding of this study was that the effect of perceived institutional support on intention to use did not differ significantly between high and low levels of social interaction anxiety. This may be because social interaction anxiety mainly operates within individuals’ close microenvironments, whereas the policies and institutional norms underlying perceived institutional support tend to act at the macro level (Chen and Zhang, 2021; Mora et al. 2023); consequently, the relationship between perceived institutional support and intention to use does not vary significantly with the level of social interaction anxiety.

We also found that digitally disadvantaged groups with low social interaction anxiety were more influenced by perceived marketplace influence. Consequently, they were more willing to use the new public infrastructure. When the market trend is to aggressively build a new public infrastructure, companies will accelerate their infrastructure upgrades to keep up with the trend (Hu et al. 2023 ; Liu and Zhao, 2022 ). Companies are increasingly incorporating virtual objects into familiar areas, forcing users to embrace mixed physical and virtual spaces. In addition, it is inevitable that digitally disadvantaged groups will have to use the new public infrastructure due to the market influence of people around them using this infrastructure to manage their government or life issues. When digitally disadvantaged groups with low levels of social interaction anxiety use the new public infrastructure, they are less likely to feel fearful and excluded (Kaihlanen et al. 2022 ) and will tend to be positively influenced by the use behaviours of others to use the new public infrastructure themselves (Troisi et al. 2022 ). The opposite is true for groups with high social interaction anxiety, which leads to significant differences in perceived marketplace influence and intentions to use among digitally disadvantaged groups with different levels of social interaction anxiety.

Existing mixed physical and virtual spaces exhibit exceptional technical complexity, and the results of this study affirm the importance of technical factors in affecting intentions to use. In this paper, we emphasised effort expectancy as the ease of use of the new public infrastructure (Venkatesh et al. 2003 ), which had a significant effect on digitally disadvantaged groups with high levels of social interaction anxiety but no significant effect on those with low levels of social interaction anxiety. Digitally disadvantaged groups with high levels of social interaction anxiety are likely to have a stronger sense of rejection due to environmental pressures if the new public infrastructure is too cumbersome to run or operate; they may therefore prefer using simple facilities and services. Numerous scholars have proven in educational (Hu et al. 2022 ), medical (Bai and Guo, 2022 ), business (Susanto et al. 2018 ), and other fields that good product design promotes users’ intentions to use technology (Chen et al. 2023 ). For digitally disadvantaged groups, accessible and inclusive product designs can more effectively incentivise their intentions to use the new public infrastructure (Hsu and Peng, 2022 ).

Facilitating conditions are technical factors that represent facility-related support services. The study results showed a significant positive effect of facilitating conditions on intention to use. This result is consistent with the results of previous studies regarding physical space. Professional consultation (Vinnikova et al. 2020 ) and training (Yang et al. 2023 ) on products in conventional fields can enhance users’ confidence, which can then be translated into intentions to use (Saparudin et al. 2020 ). Although the form of the new public infrastructure has changed in the direction of integration, its target object is still the user in physical space. Therefore, better facilitating conditions can enhance users’ sense of trust and promote their intentions to use (Alalwan et al. 2017 ; Mogaji et al. 2021 ). Concerning integration, because the new public infrastructure can assume multiple forms, it is difficult for digitally disadvantaged groups to know whether a particular infrastructure has good facilitating conditions. It is precisely such uncertainties that cause users with high social interaction anxiety to worry that they will be unable to use the facilities effectively. They may then worry that they will be burdened by scrutiny from strangers, causing resistance. Even when good facilitating conditions exist, groups with high social interaction anxiety do not necessarily intend to use them. Therefore, there were no significant differences between the different levels of social interaction anxiety in terms of facilitating conditions and intention to use them.

Theoretical value

In this study, we mainly examined the factors influencing digitally disadvantaged groups’ intentions to use the new public infrastructure consisting of mixed physical and virtual spaces. The empirical results of this paper make theoretical contributions to the inclusive construction of mixed spaces in several areas.

First, based on an understanding of urban development involving a deep integration of physical space with virtual space, we contextualise virtual space within the parameters of public infrastructure to shape the concept of a new public infrastructure. At the same time, by including the service system, the virtual community, and other non-physical factors in the realm where the virtual and the real are integrated, we form a concept of mixed physical and virtual spaces, which expands the scope of research related to virtual and physical spaces and provides new ideas for relevant future research.

Second, this paper makes a preliminary investigation of inclusion in the construction of the new public infrastructure and innovatively examines the factors that affect digitally disadvantaged groups’ willingness to use the mixed infrastructure, considering them in terms of individual, social, and technical factors. Moreover, holding that social interaction anxiety is consistent with the psychological characteristics of digitally disadvantaged groups, we introduce social interaction anxiety into the research field and distinguish between the performance of subjects with high social interaction anxiety and the performance of those with low social interaction anxiety. From the perspective of digitally disadvantaged groups, this shows the regulatory effect of social interaction anxiety on users’ psychology and behaviours. These preliminary findings may lead to greater attention being paid to digitally disadvantaged groups and prompt more studies on inclusion.

In addition, while conducting background research, we visited public welfare organisations and viewed government service lists to obtain first-hand information about digitally disadvantaged groups. Through our paper, we encourage the academic community to pay greater attention to theoretical research on digitally disadvantaged groups in the hope that deepening and broadening such research will promote the inclusion of digitally disadvantaged groups in the design of public infrastructure.

Practical value

Based on a large quantity of empirical research data, we explored the digital integration factors that affect users’ intentions to use the new public infrastructure. To some extent, this provides new ideas and development directions for inclusive smart city construction. Inclusion in existing cities mainly concerns the improvement of specific technologies, but the results of this study show that technological factors are only part of the picture. The government should introduce relevant policies to promptly adapt the new public infrastructure to digitally disadvantaged groups, and the legislature should enact appropriate laws. In addition, the study results can guide the design of mixed physical and virtual spaces for the new public infrastructure. Enterprises can refer to the results of this study to identify inconveniences in their existing facilities, optimise their service processes, and improve the inclusiveness of urban institutions. Furthermore, attention should be paid to the moderating role of social interaction anxiety in the process. Inclusive urban construction should not only be physical but should closely consider the inner workings of digitally disadvantaged groups. The government and enterprises should consider the specific requirements of people with high social interaction anxiety, such as by simplifying the enquiry processes in their facilities or inserting psychological comfort measures into the processes.

Limitations and future research

Due to resource and time limitations, this paper has some shortcomings. First, we considered a broad range of digitally disadvantaged groups and conducted a forward-looking exploratory study. Because we collected data through an online questionnaire, the range of volunteers who could respond was restricted: only participants who met at least one of the screening conditions were identified as members of digitally disadvantaged groups and invited to the follow-up survey. To spare participants painful reflection on their disabilities or related conditions, and to avoid biasing the survey data through such recollections, we made no detailed distinction between participants’ degrees of impairment or the reasons for impairment. There were two further reasons for this choice: first, an overly detailed questionnaire might have infringed on participants’ privacy rights, and second, since little research has been conducted on inclusiveness in relation to mixed physical and virtual spaces, this work was necessarily exploratory. We therefore concentrated on digitally disadvantaged groups’ intentions to use the new public infrastructure. Future research could focus on digitally disadvantaged individuals who share the same impairment, or could further increase the sample size to investigate participants’ intentions to use the new public infrastructure in more detail.

Second, countries differ in their levels of economic development and in the size of their digitally disadvantaged populations. Our study mainly concerned the willingness of digitally disadvantaged groups in China to use the new public infrastructure. In the future, therefore, the intentions of digitally disadvantaged groups to use new public infrastructures involving mixed physical and virtual spaces could be explored in other national contexts. Furthermore, in addition to the effects of social interaction anxiety examined in this paper, future researchers could consider other moderators associated with individual differences, such as age, familiarity with technology, and disability status. We also encourage more scholars to explore digitally disadvantaged groups’ use of the new public infrastructure, so as to promote inclusive smart city construction and sustainable social development.

Previous researchers have explored users’ intentions to use virtual technology services and have analysed the factors that influence those intentions (Akdim et al. 2022; Liébana-Cabanillas et al. 2020; Nguyen and Dao, 2024). However, that work has mainly focused on single virtual or physical spaces (Scavarelli et al. 2021; Zhang et al. 2020), and the topic has rarely been discussed in relation to mixed physical and virtual spaces. In addition, previous studies have mainly adopted a technology perspective (Buckingham et al. 2022; Carney and Kandt, 2022), largely ignoring the influence of digitally disadvantaged groups’ psychological characteristics and of the overall social environment on their intentions to use. To fill this gap, we constructed a UTAUT-based model of intentions to use the new public infrastructure that mixes physical and virtual spaces. We examined the mechanisms influencing digitally disadvantaged groups’ use of the new public infrastructure from the perspectives of individual, social, and technical factors, and we processed and analysed 337 valid samples using PLS-SEM. The results showed significant correlations between the six user factor variables and intention to use the new public infrastructure. In addition, for digitally disadvantaged groups, different degrees of social interaction anxiety had significantly different effects on the impacts of performance expectancy, psychological reactance, perceived marketplace influence, and effort expectancy on intention to use, while there were no differences in the impacts of perceived institutional support and facilitating conditions on intention to use.
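
The authors estimated the model with PLS-SEM, and their analysis scripts are not published. As an illustration only, the following minimal Python sketch shows how a moderation effect of this kind could be probed with a simple interaction-term regression standing in for the full structural model; the file name and the column names (PE for performance expectancy, SIA for social interaction anxiety, ITU for intention to use) are hypothetical.

    # Minimal sketch (not the authors' code): does social interaction
    # anxiety (SIA) moderate the effect of performance expectancy (PE)
    # on intention to use (ITU)? A full analysis would use PLS-SEM
    # software; this OLS stand-in only illustrates the moderation logic.
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("survey_responses.csv")  # hypothetical construct scores

    # Standardise so the interaction coefficient is interpretable.
    for col in ["PE", "SIA"]:
        df[col] = (df[col] - df[col].mean()) / df[col].std()

    # A significant PE:SIA term indicates moderation by anxiety.
    model = smf.ols("ITU ~ PE * SIA", data=df).fit()
    print(model.summary())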

In terms of theoretical value, we build on previous scholarly research on the conceptualisation of new public infrastructures and of mixed physical and virtual spaces (Aslesen et al. 2019; Cocciolo, 2010), arguing that user, social, and technological dimensions influence the use of new public infrastructures by digitally disadvantaged groups in mixed physical and virtual spaces, and that social interaction anxiety plays a moderating role. This study also prospectively explores the new phenomenon of digitally disadvantaged groups using new public infrastructures in mixed physical and virtual spaces, paving the way for future scholars to explore the field in both theory and literature. In terms of practical value, the research findings should help to promote effective government policies and corporate designs and to prompt the development of a new public infrastructure that better meets the needs of digitally disadvantaged groups. Moreover, this study should help to direct social and government attention to the problems digitally disadvantaged groups encounter in using new public infrastructures. It has significant implications for the future development of smart cities and urban digital inclusiveness in China and worldwide.

Data availability

The datasets generated during and/or analysed during the current study are not publicly available due to the confidentiality of the respondents’ information but are available from the corresponding author upon reasonable request for academic purposes only.

Abbad MMM (2021) Using the UTAUT model to understand students’ usage of e-learning systems in developing countries. Educ. Inf. Technol. 26(6):7205–7224. https://doi.org/10.1007/s10639-021-10573-5

Acharya B, Lee J, Moon H (2022) Preference heterogeneity of local government for implementing ICT infrastructure and services through public-private partnership mechanism. Socio-Economic Plan. Sci. 79(9):101103. https://doi.org/10.1016/j.seps.2021.101103

Ahn MJ, Chen YC (2022) Digital transformation toward AI-augmented public administration: the perception of government employees and the willingness to use AI in government. Gov. Inf. Q. 39(2):101664. https://doi.org/10.1016/j.giq.2021.101664

Akdim K, Casalo LV, Flavián C (2022) The role of utilitarian and hedonic aspects in the continuance intention to use social mobile apps. J. Retail. Consum. Serv. 66:102888. https://doi.org/10.1016/j.jretconser.2021.102888

Al-Masri AN, Ijeh A, Nasir M (2019) Smart city framework development: challenges and solutions. Smart Technologies and Innovation for a Sustainable Future, Cham

Alalwan AA, Dwivedi YK, Rana NP (2017) Factors influencing adoption of mobile banking by Jordanian bank customers: extending UTAUT2 with trust. Int. J. Inf. Manag. 37(3):99–110. https://doi.org/10.1016/j.ijinfomgt.2017.01.002

Ali A, Li C, Hussain A, Bakhtawar (2020) Hedonic shopping motivations and obsessive–compulsive buying on the internet. Glob. Bus. Rev. 25(1):198–215. https://doi.org/10.1177/0972150920937535

Ali U, Mehmood A, Majeed MF, Muhammad S, Khan MK, Song HB, Malik KM (2019) Innovative citizen’s services through public cloud in Pakistan: user’s privacy concerns and impacts on adoption. Mob. Netw. Appl. 24(1):47–68. https://doi.org/10.1007/s11036-018-1132-x

Almaiah MA, Alamri MM, Al-Rahmi W (2019) Applying the UTAUT model to explain the students’ acceptance of mobile learning system in higher education. IEEE Access 7:174673–174686. https://doi.org/10.1109/access.2019.2957206

Annaswamy TM, Verduzco-Gutierrez M, Frieden L (2020) Telemedicine barriers and challenges for persons with disabilities: COVID-19 and beyond. Disabil Health J 13(4):100973. https://doi.org/10.1016/j.dhjo.2020.100973

Aslesen HW, Martin R, Sardo S (2019) The virtual is reality! On physical and virtual space in software firms’ knowledge formation. Entrepreneurship Regional Dev. 31(9-10):669–682. https://doi.org/10.1080/08985626.2018.1552314

Bagozzi RP, Phillips LW (1982) Representing and testing organizational theories: a holistic construal. Adm. Sci. Q. 27(3):459–489. https://doi.org/10.2307/2392322

Bai B, Guo ZQ (2022) Understanding users’ continuance usage behavior towards digital health information system driven by the digital revolution under COVID-19 context: an extended UTAUT model. Psychol. Res. Behav. Manag. 15:2831–2842. https://doi.org/10.2147/prbm.S364275

Bélanger F, Carter L (2008) Trust and risk in e-government adoption. J. Strategic Inf. Syst. 17(2):165–176. https://doi.org/10.1016/j.jsis.2007.12.002

Blasi S, Ganzaroli A, De Noni I (2022) Smartening sustainable development in cities: strengthening the theoretical linkage between smart cities and SDGs. Sustain. Cities Soc. 80:103793. https://doi.org/10.1016/j.scs.2022.103793

Botelho FHF (2021) Accessibility to digital technology: virtual barriers, real opportunities. Assistive Technol. 33:27–34. https://doi.org/10.1080/10400435.2021.1945705

Brehm JW (1966) A theory of psychological reactance. Academic Press

Buckingham SA, Walker T, Morrissey K, Smartline Project T (2022) The feasibility and acceptability of digital technology for health and wellbeing in social housing residents in Cornwall: a qualitative scoping study. Digital Health 8:20552076221074124. https://doi.org/10.1177/20552076221074124

Bygrave W, Minniti M (2000) The social dynamics of entrepreneurship. Entrepreneurship Theory Pract. 24(3):25–36. https://doi.org/10.1177/104225870002400302

Cai Y, Qi W, Yi FM (2023) Smartphone use and willingness to adopt digital pest and disease management: evidence from litchi growers in rural China. Agribusiness 39(1):131–147. https://doi.org/10.1002/agr.21766

Carney F, Kandt J (2022) Health, out-of-home activities and digital inclusion in later life: implications for emerging mobility services. Journal of Transport & Health 24:101311. https://doi.org/10.1016/j.jth.2021.101311

Chao CM (2019) Factors determining the behavioral intention to use mobile learning: an application and extension of the UTAUT model. Front. Psychol. 10:1652. https://doi.org/10.3389/fpsyg.2019.01652

Chen HY, Chen HY, Zhang W, Yang CD, Cui HX (2021) Research on marketing prediction model based on Markov Prediction. Wirel. Commun. Mob. Comput. 2021(9):4535181. https://doi.org/10.1155/2021/4535181

Chen J, Cui MY, Levinson D (2024) The cost of working: measuring physical and virtual access to jobs. Int. J. Urban Sci. 28(2):318–334. https://doi.org/10.1080/12265934.2023.2253208

Chen JX, Wang T, Fang ZY, Wang HT (2023) Research on elderly users’ intentions to accept wearable devices based on the improved UTAUT model. Front. Public Health 10(12):1035398. https://doi.org/10.3389/fpubh.2022.1035398

Chen KF, Guaralda M, Kerr J, Turkay S (2024) Digital intervention in the city: a conceptual framework for digital placemaking. Urban Des. Int. 29(1):26–38. https://doi.org/10.1057/s41289-022-00203-y

Chen L, Zhang H (2021) Strategic authoritarianism: the political cycles and selectivity of China’s tax-break policy. Am. J. Political Sci. 65(4):845–861. https://doi.org/10.1111/ajps.12648

Chiu YTH, Hofer KM (2015) Service innovation and usage intention: a cross-market analysis. J. Serv. Manag. 26(3):516–538. https://doi.org/10.1108/josm-10-2014-0274

Chou SW, Min HT, Chang YC, Lin CT (2010) Understanding continuance intention of knowledge creation using extended expectation-confirmation theory: an empirical study of Taiwan and China online communities. Behav. Inf. Technol. 29(6):557–570. https://doi.org/10.1080/01449290903401986

Cocciolo A (2010) Alleviating physical space constraints using virtual space? A study from an urban academic library. Libr. Hi Tech. 28(4):523–535. https://doi.org/10.1108/07378831011096204

Davidson CA, Willner CJ, van Noordt SJR, Banz BC, Wu J, Kenney JG, Johannesen JK, Crowley MJ (2019) One-month stability of cyberball post-exclusion ostracism distress in Adolescents. J. Psychopathol. Behav. Assess. 41(3):400–408. https://doi.org/10.1007/s10862-019-09723-4

Dogruel L, Joeckel S, Bowman ND (2015) The use and acceptance of new media entertainment technology by elderly users: development of an expanded technology acceptance model. Behav. Inf. Technol. 34(11):1052–1063. https://doi.org/10.1080/0144929x.2015.1077890

Fang ML, Canham SL, Battersby L, Sixsmith J, Wada M, Sixsmith A (2019) Exploring privilege in the digital divide: implications for theory, policy, and practice. Gerontologist 59(1):E1–E15. https://doi.org/10.1093/geront/gny037

Fergus TA, Valentiner DP, McGrath PB, Gier-Lonsway SL, Kim HS (2012) Short forms of the social interaction anxiety scale and the social phobia scale. J. Personal. Assess. 94(3):310–320. https://doi.org/10.1080/00223891.2012.660291

Garone A, Pynoo B, Tondeur J, Cocquyt C, Vanslambrouck S, Bruggeman B, Struyven K (2019) Clustering university teaching staff through UTAUT: implications for the acceptance of a new learning management system. Br. J. Educ. Technol. 50(5):2466–2483. https://doi.org/10.1111/bjet.12867

Gu QH (2020) Frame-based conceptual model of smart city’s applications in China. International Conference on Green Development and Environmental Science and Technology (ICGDE), Changsha, China

Guo MJ, Liu YH, Yu HB, Hu BY, Sang ZQ (2016) An overview of smart city in China. China Commun. 13(5):203–211. https://doi.org/10.1109/cc.2016.7489987

Gursoy D, Chi OHX, Lu L, Nunkoo R (2019) Consumers acceptance of artificially intelligent (AI) device use in service delivery. Int. J. Inf. Manag. 49:157–169. https://doi.org/10.1016/j.ijinfomgt.2019.03.008

Hair JF, Hult GTM, Ringle CM, Sarstedt M (2022) A primer on partial least squares structural equation modeling (PLS-SEM), 3rd edn. SAGE Publications, Inc

Hair Jr JF, Sarstedt M, Hopkins L, Kuppelwieser VG (2014) Partial least squares structural equation modeling (PLS-SEM): an emerging tool in business research. Eur. Bus. Rev. 26(2):106–121. https://doi.org/10.1108/ebr-10-2013-0128

Hair JF, Risher JJ, Sarstedt M, Ringle CM (2019) When to use and how to report the results of PLS-SEM. Eur. Bus. Rev. 31(1):2–24. https://doi.org/10.1108/ebr-11-2018-0203

Heerink M, Kröse B, Evers V, Wielinga B (2010) Assessing acceptance of assistive social agent technology by older adults: the almere model. Int. J. Soc. Robot. 2(4):361–375. https://doi.org/10.1007/s12369-010-0068-5

Henseler J, Ringle CM, Sarstedt M (2015) A new criterion for assessing discriminant validity in variance-based structural equation modeling. J. Acad. Mark. Sci. 43(1):115–135. https://doi.org/10.1007/s11747-014-0403-8

Henseler J, Ringle CM, Sinkovics RR (2009) The use of partial least squares path modeling in international marketing. In: Sinkovics RR, Ghauri PN (eds) New Challenges to International Marketing, vol 20. Emerald Group Publishing Limited, pp 277–319. https://doi.org/10.1108/S1474-7979(2009)0000020014

Hoque R, Sorwar G (2017) Understanding factors influencing the adoption of mHealth by the elderly: an extension of the UTAUT model. Int. J. Med. Inform. 101:75–84. https://doi.org/10.1016/j.ijmedinf.2017.02.002

Hsu CW, Peng CC (2022) What drives older adults’ use of mobile registration apps in Taiwan? An investigation using the extended UTAUT model. Inform. Health Soc. Care 47(3):258–273. https://doi.org/10.1080/17538157.2021.1990299

Hu J, Zhang H, Irfan M (2023) How does digital infrastructure construction affect low-carbon development? A multidimensional interpretation of evidence from China. J. Clean. Prod. 396(9):136467. https://doi.org/10.1016/j.jclepro.2023.136467

Hu TF, Guo RS, Chen C (2022) Understanding mobile payment adaption with the integrated model of UTAUT and MOA model. 2022 Portland International Conference on Management of Engineering and Technology (PICMET), Portland, OR, USA

Hutchins N, Allen A, Curran M, Kannis-Dymand L (2021) Social anxiety and online social interaction. Aust. Psychologist 56(2):142–153. https://doi.org/10.1080/00050067.2021.1890977

Iancu I, Iancu B (2020) Designing mobile technology for elderly. A theoretical overview. Technol. Forecast. Soc. Change 155(9):119977. https://doi.org/10.1016/j.techfore.2020.119977

Jakonen OI (2024) Smart cities, virtual futures? Interests of urban actors in mediating digital technology and urban space in Tallinn, Estonia. Urban Stud. https://doi.org/10.1177/00420980241245871

Ji TT, Chen JH, Wei HH, Su YC (2021) Towards people-centric smart city development: investigating the citizens’ preferences and perceptions about smart-city services in Taiwan. Sustain. Cities Soc. 67(14):102691. https://doi.org/10.1016/j.scs.2020.102691

Jöreskog KG (1971) Simultaneous factor analysis in several populations. Psychometrika 36(4):409–426. https://doi.org/10.1007/BF02291366

Joshi Y, Uniyal DP, Sangroya D (2021) Investigating consumers’ green purchase intention: examining the role of economic value, emotional value and perceived marketplace influence. J. Clean. Prod. 328(8):129638. https://doi.org/10.1016/j.jclepro.2021.129638

Kadylak T, Cotten SR (2020) United States older adults’ willingness to use emerging technologies. Inf. Commun. Soc. 23(5):736–750. https://doi.org/10.1080/1369118x.2020.1713848

Kaihlanen AM, Virtanen L, Buchert U, Safarov N, Valkonen P, Hietapakka L, Hörhammer I, Kujala S, Kouvonen A, Heponiemi T (2022) Towards digital health equity-a qualitative study of the challenges experienced by vulnerable groups in using digital health services in the COVID-19 era. BMC Health Services Research 22(1):188. https://doi.org/10.1186/s12913-022-07584-4

Khan HH, Malik MN, Zafar R, Goni FA, Chofreh AG, Klemes JJ, Alotaibi Y (2020) Challenges for sustainable smart city development: a conceptual framework. Sustain. Dev. 28(5):1507–1518. https://doi.org/10.1002/sd.2090

Kim S, Park H (2013) Effects of various characteristics of social commerce (s-commerce) on consumers’ trust and trust performance. Int. J. Inf. Manag. 33(2):318–332. https://doi.org/10.1016/j.ijinfomgt.2012.11.006

Leary RB, Vann RJ, Mittelstaedt JD (2019) Perceived marketplace influence and consumer ethical action. J. Consum. Aff. 53(3):1117–1145. https://doi.org/10.1111/joca.12220

Leary RB, Vann RJ, Mittelstaedt JD, Murphy PE, Sherry JF (2014) Changing the marketplace one behavior at a time: perceived marketplace influence and sustainable consumption. J. Bus. Res. 67(9):1953–1958. https://doi.org/10.1016/j.jbusres.2013.11.004

Lee DD, Arya LA, Andy UU, Sammel MD, Harvie HS (2019) Willingness of women with pelvic floor disorders to use mobile technology to communicate with their health care providers. Female Pelvic Med. Reconstructive Surg. 25(2):134–138. https://doi.org/10.1097/spv.0000000000000668

Lee SW, Sung HJ, Jeon HM (2019) Determinants of continuous intention on food delivery apps: extending UTAUT2 with information quality. Sustainability 11(11):3141. https://doi.org/10.3390/su11113141

Li Mo QZ, Bai BY (2023) Height dissatisfaction and loneliness among adolescents: the chain mediating role of social anxiety and social support. Curr. Psychol. 42(31):27296–27304. https://doi.org/10.1007/s12144-022-03855-9

Liébana-Cabanillas F, Japutra A, Molinillo S, Singh N, Sinha N (2020) Assessment of mobile technology use in the emerging market: analyzing intention to use m-payment services in India. Telecommun. Policy 44(9):102009. https://doi.org/10.1016/j.telpol.2020.102009

Liu HD, Zhao HF (2022) Upgrading models, evolutionary mechanisms and vertical cases of service-oriented manufacturing in SVC leading enterprises: product-development and service-innovation for industry 4.0. Humanities Soc. Sci. Commun. 9(1):387. https://doi.org/10.1057/s41599-022-01409-9

Liu ZL, Wang Y, Xu Q, Yan T (2017) Study on smart city construction of Jiujiang based on IOT technology. 3rd International Conference on Advances in Energy, Environment and Chemical Engineering (AEECE), Chengdu, China

Magee WJ, Eaton WW, Wittchen H-U, McGonagle KA, Kessler RC (1996) Agoraphobia, simple phobia, and social phobia in the National Comorbidity Survey. Arch. Gen. Psychiatry 53(2):159–168

Martins R, Oliveira T, Thomas M, Tomás S (2019) Firms’ continuance intention on SaaS use - an empirical study. Inf. Technol. People 32(1):189–216. https://doi.org/10.1108/itp-01-2018-0027

Miron AM, Brehm JW (2006) Reactance theory - 40 Years later. Z. Fur Sozialpsychologie 37(1):9–18. https://doi.org/10.1024/0044-3514.37.1.9

Mogaji E, Balakrishnan J, Nwoba AC, Nguyen NP (2021) Emerging-market consumers’ interactions with banking chatbots. Telematics and Informatics 65:101711. https://doi.org/10.1016/j.tele.2021.101711

Mogaji E, Bosah G, Nguyen NP (2023) Transport and mobility decisions of consumers with disabilities. J. Consum. Behav. 22(2):422–438. https://doi.org/10.1002/cb.2089

Mogaji E, Nguyen NP (2021) Transportation satisfaction of disabled passengers: evidence from a developing country. Transportation Res. Part D.-Transp. Environ. 98:102982. https://doi.org/10.1016/j.trd.2021.102982

Mora L, Gerli P, Ardito L, Petruzzelli AM (2023) Smart city governance from an innovation management perspective: theoretical framing, review of current practices, and future research agenda. Technovation 123:102717. https://doi.org/10.1016/j.technovation.2023.102717

Mulvale G, Moll S, Miatello A, Robert G, Larkin M, Palmer VJ, Powell A, Gable C, Girling M (2019) Codesigning health and other public services with vulnerable and disadvantaged populations: insights from an international collaboration. Health Expectations 22(3):284–297. https://doi.org/10.1111/hex.12864

Narzt W, Mayerhofer S, Weichselbaum O, Pomberger G, Tarkus A, Schumann M (2016) Designing and evaluating barrier-free travel assistance services. 3rd International Conference on HCI in Business, Government, and Organizations - Information Systems (HCIBGO), Held as Part of the 18th International Conference on Human-Computer Interaction (HCI International), Toronto, Canada

Nguyen GD, Dao THT (2024) Factors influencing continuance intention to use mobile banking: an extended expectation-confirmation model with moderating role of trust. Humanities Soc. Sci. Commun. 11(1):276. https://doi.org/10.1057/s41599-024-02778-z

Nicolas C, Kim J, Chi S (2020) Quantifying the dynamic effects of smart city development enablers using structural equation modeling. Sustain. Cities Soc. 53:101916. https://doi.org/10.1016/j.scs.2019.101916

Paköz MZ, Sözer C, Dogan A (2022) Changing perceptions and usage of public and pseudo-public spaces in the post-pandemic city: the case of Istanbul. Urban Des. Int. 27(1):64–79. https://doi.org/10.1057/s41289-020-00147-1

Perez AJ, Siddiqui F, Zeadally S, Lane D (2023) A review of IoT systems to enable independence for the elderly and disabled individuals. Internet Things 21:100653. https://doi.org/10.1016/j.iot.2022.100653

Purohit S, Arora R, Paul J (2022) The bright side of online consumer behavior: continuance intention for mobile payments. J. Consum. Behav. 21(3):523–542. https://doi.org/10.1002/cb.2017

Ringle CM, Sarstedt M, Straub DW (2012) Editor’s Comments: A Critical Look at the Use of PLS-SEM in “MIS Quarterly”. MIS Q. 36(1):III–XIV

Roubroeks MAJ, Ham JRC, Midden CJH (2010) The dominant robot: threatening robots cause psychological reactance, especially when they have incongruent goals. 5th International Conference on Persuasive Technology, Copenhagen, DENMARK

Saparudin M, Rahayu A, Hurriyati R, Sultan MA, Ramdan AM (2020) Consumers’ continuance intention use of mobile banking in Jakarta: extending UTAUT models with trust. 5th International Conference on Information Management and Technology (ICIMTech), Bandung, Indonesia

Scavarelli A, Arya A, Teather RJ (2021) Virtual reality and augmented reality in social learning spaces: a literature review. Virtual Real. 25(1):257–277. https://doi.org/10.1007/s10055-020-00444-8

Schneider AB, Leonard B (2022) From anxiety to control: mask-wearing, perceived marketplace influence, and emotional well-being during the COVID-19 pandemic. J. Consum. Aff. 56(1):97–119. https://doi.org/10.1111/joca.12412

Schou J, Pors AS (2019) Digital by default? A qualitative study of exclusion in digitalised welfare. Soc. Policy Adm. 53(3):464–477. https://doi.org/10.1111/spol.12470

Schultz LT, Heimberg RG (2008) Attentional focus in social anxiety disorder: potential for interactive processes. Clin. Psychol. Rev. 28(7):1206–1221. https://doi.org/10.1016/j.cpr.2008.04.003

Shibusawa H (2000) Cyberspace and physical space in an urban economy. Pap. Regional Sci. 79(3):253–270. https://doi.org/10.1007/pl00013610

Susanto A, Mahadika PR, Subiyakto A, Nuryasin (2018) Analysis of electronic ticketing system acceptance using an extended unified theory of acceptance and use of technology (UTAUT). 6th International Conference on Cyber and IT Service Management (CITSM), Parapat, Indonesia

Susilawati C, Wong J, Chikolwa B (2010) Public participation, values and interests in the procurement of infrastructure projects in Australia: a review and future research direction. 2010 International Conference on Construction and Real Estate Management, Brisbane, Australia

Tam C, Santos D, Oliveira T (2020) Exploring the influential factors of continuance intention to use mobile Apps: extending the expectation confirmation model. Inf. Syst. Front. 22(1):243–257. https://doi.org/10.1007/s10796-018-9864-5

Teo T, Zhou MM, Fan ACW, Huang F (2019) Factors that influence university students’ intention to use Moodle: a study in Macau. EtrD-Educ. Technol. Res. Dev. 67(3):749–766. https://doi.org/10.1007/s11423-019-09650-x

Thapa S, Nielsen JB, Aldahmash AM, Qadri FR, Leppin A (2021) Willingness to use digital health tools in patient care among health care professionals and students at a university hospital in Saudi Arabia: quantitative cross-sectional survey. JMIR Med. Educ. 7(1):e18590. https://doi.org/10.2196/18590

Tian X, Solomon DH, Brisini KS (2020) How the comforting process fails: psychological reactance to support messages. J. Commun. 70(1):13–34. https://doi.org/10.1093/joc/jqz040

Troisi O, Fenza G, Grimaldi M, Loia F (2022) Covid-19 sentiments in smart cities: the role of technology anxiety before and during the pandemic. Computers in Human Behavior 126:106986. https://doi.org/10.1016/j.chb.2021.106986

Venkatesh V, Brown SA (2001) A longitudinal investigation of personal computers in homes: adoption determinants and emerging challenges. MIS Q. 25(1):71–102. https://doi.org/10.2307/3250959

Venkatesh V, Morris MG, Davis GB, Davis FD (2003) User acceptance of information technology: toward a unified view. MIS Q. 27(3):425–478. https://doi.org/10.2307/30036540

Vinnikova A, Lu LD, Wei JC, Fang GB, Yan J (2020) The Use of smartphone fitness applications: the role of self-efficacy and self-regulation. International Journal of Environmental Research and Public Health 17(20):7639. https://doi.org/10.3390/ijerph17207639

Wang BA, Zhang R, Wang Y (2021) Mechanism influencing older people’s willingness to use intelligent aged-care products. Healthcare 9(7):864. https://doi.org/10.3390/healthcare9070864

Wang CHJ, Steinfeld E, Maisel JL, Kang B (2021) Is your smart city inclusive? Evaluating proposals from the US department of transportation’s smart city challenge. Sustainable Cities and Society 74:103148. https://doi.org/10.1016/j.scs.2021.103148

Wang XY (2007) Mutually augmented virtual environments for architectural design and collaboration. 12th Computer-Aided Architectural Design Futures Conference, Sydney, Australia

Werner P, Karnieli E (2003) A model of the willingness to use telemedicine for routine and specialized care. J. Telemed. Telecare 9(5):264–272. https://doi.org/10.1258/135763303769211274

Yadav J, Saini AK, Yadav AK (2019) Measuring citizens engagement in e-Government projects - Indian perspective. J. Stat. Manag. Syst. 22(2):327–346. https://doi.org/10.1080/09720510.2019.1580908

Yang CC, Liu C, Wang YS (2023) The acceptance and use of smartphones among older adults: differences in UTAUT determinants before and after training. Libr. Hi Tech. 41(5):1357–1375. https://doi.org/10.1108/lht-12-2021-0432

Yang K, Forney JC (2013) The moderating role of consumer technology anxiety in mobile shopping adoption: differential effects of facilitating conditions and social influences. J. Electron. Commer. Res. 14(4):334–347

Yeung HL, Hao P (2024) Telecommuting amid Covid-19: the Governmobility of work-from-home employees in Hong Kong. Cities 148:104873. https://doi.org/10.1016/j.cities.2024.104873

Zander V, Gustafsson C, Stridsberg SL, Borg J (2023) Implementation of welfare technology: a systematic review of barriers and facilitators. Disabil. Rehabilitation-Assistive Technol. 18(6):913–928. https://doi.org/10.1080/17483107.2021.1938707

Zeebaree M, Agoyi M, Agel M (2022) Sustainable adoption of e-government from the UTAUT perspective. Sustainability 14(9):5370. https://doi.org/10.3390/su14095370

Zhang YX, Liu HX, Kang SC, Al-Hussein M (2020) Virtual reality applications for the built environment: Research trends and opportunities. Autom. Constr. 118:103311. https://doi.org/10.1016/j.autcon.2020.103311

Zhong YP, Oh S, Moon HC (2021) Service transformation under industry 4.0: investigating acceptance of facial recognition payment through an extended technology acceptance model. Technology in Society 64:101515. https://doi.org/10.1016/j.techsoc.2020.101515

Zhu DH, Deng ZZ (2021) Effect of social anxiety on the adoption of robotic training partner. Cyberpsychology Behav. Soc. Netw. 24(5):343–348. https://doi.org/10.1089/cyber.2020.0179

Acknowledgements

This research was supported by the National Social Science Foundation of China, grant number 22BGJ037; the Fundamental Research Funds for the Provincial Universities of Zhejiang, grant number GB202301004; and the Zhejiang Province University Students Science and Technology Innovation Activity Program, grant numbers 2023R403013, 2023R403010 & 2023R403086.

Author information

These authors contributed equally: Chengxiang Chu, Zhenyang Shen, Hanyi Xu.

Authors and Affiliations

School of Management, Zhejiang University of Technology, Hangzhou, China

Chengxiang Chu, Zhenyang Shen, Qizhi Wei & Cong Cao

Law School, Zhejiang University of Technology, Hangzhou, China

Hanyi Xu

Contributions

Conceptualisation: C.C., CX.C. and ZY.S.; Methodology: CX.C. and HY.X.; Validation: ZY.S. and QZ.W.; Formal analysis: HY.X.; Investigation: CX.C., ZY.S. and HY.X.; Resources: C.C.; Data curation: CX.C. and HY.X.; Writing–original draft preparation: CX.C., ZY.S., HY.X. and QZ.W.; Writing–review & editing: CX.C. and C.C.; Visualisation: ZY.S. and HY.X.; Supervision: C.C.; Funding acquisition: C.C., CX.C. and ZY.S.; all authors approved the final manuscript to be submitted.

Corresponding author

Correspondence to Cong Cao.

Ethics declarations

Ethical approval

Ethical approval for the involvement of human subjects in this study was granted by the Institutional Review Board of the School of Management, Zhejiang University of Technology, China (reference number CC-2023-1-0008-0005-SOM-ZJUT).

Informed consent

Informed consent was obtained from all individual participants included in the study.

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix A. Measurement items

Factors

Items

Source

Performance Expectancy

1. Use of ‘accessibility infrastructure’ helps me to handle affairs quickly and efficiently.

Ali et al. ( )

2. ‘Accessibility infrastructure’ ensures the accessibility and availability of facilities for handling my affairs.

3. ‘Accessibility infrastructure’ saves time in handling my affairs.

4. ‘Accessibility infrastructure’ saves effort in handling my affairs.

Psychological Reactance

1. The existence or sudden intervention of ‘accessibility infrastructure’ makes me feel angry.

Tian et al. ( )

2. The existence or sudden intervention of ‘accessibility infrastructure’ makes me feel irritated.

3. I criticised its existence while using the ‘accessibility infrastructure’.

4. When using the ‘accessibility infrastructure’, I preferred the original state.

Perceived Institutional Support

1. My country helps me use the ‘accessibility infrastructure’.

Almaiah et al. ( ); Garone et al. ( )

2. Public institutions that are important to me think that I should use the ‘accessibility infrastructure’.

3. I believe that my country supports the use of the ‘accessibility infrastructure’.

Perceived Marketplace Influence

1. I believe that many people in my country use the ‘accessibility infrastructure’.

Almaiah et al. ( ); Garone et al. ( )

2. I believe that many people in my country desire to use the ‘accessibility infrastructure’.

3. I believe that many people in my country approve of using the ‘accessibility infrastructure’.

Effort Expectancy

1. My interactions with the ‘accessibility infrastructure’ are clear and understandable.

Venkatesh et al. ( )

2. It is easy for me to become skilful in using the ‘accessibility infrastructure’.

3. Learning to operate the ‘accessibility infrastructure’ is easy for me.

Facilitating Conditions

1. I have the resources necessary to use the ‘accessibility infrastructure’.

Venkatesh et al. ( )

2. I have the knowledge necessary to use the ‘accessibility infrastructure’.

3. The ‘accessibility infrastructure’ is not compatible with other infrastructure I use.

4. A specific person (or group) is available to assist me with ‘accessibility infrastructure’ difficulties.

Social Interaction Anxiety

1. I feel tense if I talk about myself or my feelings.

Fergus et al. ( )

2. I tense up if I meet an acquaintance in the street.

3. I feel tense if I am alone with one other person.

4. I feel nervous mixing with people I don’t know well.

5. I worry about being ignored when in a group.

6. I feel tense mixing in a group.

Intention to Use

1. If I had access to the ‘accessibility infrastructure’, I would intend to use it.

Teo et al. ( )

2. If I had access to the ‘accessibility infrastructure’ in the coming months, I believe that I would use it rather than taking other measures.

3. I expect that I will use the ‘accessibility infrastructure’ in my daily life in the future.

4. I plan to use the ‘accessibility infrastructure’ in my daily life in the future.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/

About this article

Cite this article

Chu, C., Shen, Z., Xu, H. et al. How to avoid sinking in swamp: exploring the intentions of digitally disadvantaged groups to use a new public infrastructure that combines physical and virtual spaces. Humanit Soc Sci Commun 11, 1135 (2024). https://doi.org/10.1057/s41599-024-03684-0

Received: 28 October 2023

Accepted: 29 August 2024

Published: 04 September 2024

DOI: https://doi.org/10.1057/s41599-024-03684-0


Researchers Develop Mechanism that Predicts Severity of Aggressive Form of Breast Cancer

Scientists at Huntsman Cancer Institute at the University of Utah (the U), the National Cancer Institute-designated cancer center for the Mountain West, have made a significant breakthrough in predicting the prognosis of triple-negative breast cancer (TNBC), a particularly aggressive disease. 

Their research, published in JCO Precision Oncology as part of the TOWARDS study, has led to the development of a new mechanism that accurately forecasts the aggressiveness of TNBC. This advancement could revolutionize the way doctors treat TNBC, allowing them to identify higher-risk patients and tailor precise treatments.

Currently, TNBC lacks reliable methods to predict recurrence after treatments like chemotherapy and surgery. Unlike other breast cancers, TNBC is challenging to treat because its tumor cells lack estrogen receptors, progesterone receptors, and high levels of HER2/neu protein, according to experts at the National Institutes of Health. This often results in a higher likelihood of relapse after treatment.

Researchers used a patient-derived xenograft (PDX) model, in which biopsies of tumors from patients were implanted into mice to grow human tumors. Alana Welm, PhD, senior author of the study, senior director of basic science at Huntsman Cancer Institute, and professor of oncological sciences at the U, highlights the significance of this method, noting that it allows for an early and accurate assessment of the cancer’s aggressiveness.

Cindy Matsen, MD, co-first author of the study, leader of the Breast and Gynecologic Disease Center at Huntsman Cancer Institute and associate professor in the Department of Surgery at the U, emphasizes the direct impact this research could have on patient care. She describes the study as highly relevant to addressing a major challenge in breast cancer treatment, with the potential to create more personalized treatment plans for patients with recurrent TNBC.

The study’s mechanism was more accurate than existing methods in predicting whether TNBC will recur. Welm notes, “By implanting a biopsy of the tumor into a PDX, we can discover how aggressive the cancer is. We hope to extend our new findings to improve the current standard tests used to predict whether the patients’ cancer will recur.”

“This study addresses a very pressing problem in the clinic,” says Christos Vaklavas, MD, co-first author of the study, head of the breast cancer clinical program at Huntsman Cancer Institute, and associate professor of internal medicine at the School of Medicine at the U. “PDX models help us not only predict with greater accuracy who will relapse and who will not, but also to treat recurrences with greater precision.”

Matsen also underscored the practical benefits of the study. In the second phase, now underway as a clinical trial, scientists are testing specific drugs on PDX models. If these therapies prove effective, results will be shared with physicians, providing them with valuable insights on how to guide treatment decisions.

The study’s results are crucial: if a tumor grows in the PDX model, it often indicates a highly aggressive cancer that is significantly harder to treat. Matsen stressed the urgency of improving treatment strategies, noting the devastating impact of cancer recurrence.

“This study gives us the opportunity to provide hope and to save more lives,” Matsen says.

Other Huntsman Cancer Institute scientists who contributed to the study include Kenneth Boucher, PhD, and Bryan Welm, PhD. This study was supported by the Department of Defense Breast Cancer Research Program, the National Institutes of Health National Cancer Institute (including P30 CA02014), and the Huntsman Cancer Foundation.

About Huntsman Cancer Institute at the University of Utah

Huntsman Cancer Institute at the University of Utah is the National Cancer Institute-designated Comprehensive Cancer Center for Utah, Idaho, Montana, Nevada, and Wyoming. With a legacy of innovative cancer research, groundbreaking discoveries, and world-class patient care, we are transforming the way cancer is understood, prevented, diagnosed, treated, and survived. Huntsman Cancer Institute focuses on delivering the most advanced cancer healing and prevention through scientific breakthroughs and cutting-edge technology to advance cancer treatments of the future beyond the standard of care today. We have more than 300 open clinical trials and 250 research teams studying cancer. More genes for inherited cancers have been discovered at Huntsman Cancer Institute than at any other cancer center. Our scientists are world-renowned for understanding how cancer begins and using that knowledge to develop innovative approaches to treat each patient’s unique disease. Huntsman Cancer Institute was founded by Jon M. and Karen Huntsman.

  • Open access
  • Published: 02 September 2024

Leadership support and satisfaction of healthcare professionals in China’s leading hospitals: a cross-sectional study

  • Jinhong Zhao,
  • Tingfang Liu &
  • Yuanli Liu

BMC Health Services Research volume 24, Article number: 1016 (2024)

Healthcare professionals’ job satisfaction is a critical indicator of healthcare performance, pivotal in addressing challenges such as hospital quality outcomes, patient satisfaction, and staff retention rates. Existing evidence underscores the significant influence of healthcare leadership on job satisfaction. Our study aims to assess the impact of leadership support on the satisfaction of healthcare professionals, including physicians, nurses, and administrative staff, in China’s leading hospitals.

A cross-sectional survey study was conducted on healthcare professionals in three leading hospitals in China from July to December 2021. These hospitals represent three regions in China with varying levels of social and economic development, one in the eastern region, one in the central region, and the third in the western region. Within each hospital, we employed a convenience sampling method to conduct a questionnaire survey involving 487 healthcare professionals. We assessed perceived leadership support across five dimensions: resource support, environmental support, decision support, research support, and innovation encouragement. Simultaneously, we measured satisfaction using the MSQ among healthcare professionals.

The overall satisfaction rate among surveyed healthcare professionals was 74.33%. Our study revealed significant support from senior hospital leadership for encouraging research (96.92%), inspiring innovation (96.30%), and fostering a positive work environment (93.63%). However, lower levels of support were perceived in decision-making (81.72%) and resource allocation (80.08%). In a binary logistic regression with satisfaction as the dependent variable and healthcare professionals’ perceived leadership support, hospital of origin, job role, department, gender, age, education level, and professional designation as independent variables, support in resource provision (OR: 4.312, 95% CI: 2.412–7.710) and environmental facilitation (OR: 4.052, 95% CI: 1.134–14.471) significantly enhanced healthcare personnel satisfaction.

The findings underscore the critical role of leadership support in enhancing job satisfaction among healthcare professionals. For hospital administrators and policymakers, the study highlights the need to focus on three key dimensions: providing adequate resources, creating a supportive environment, and involving healthcare professionals in decision-making processes.

Introduction

In the era of accelerated globalization, the investigation of global leadership has assumed heightened significance [1]. Leadership, as a dynamic and evolving process, holds the potential to cultivate both the personal and professional growth of followers [2]. Effective healthcare leadership can enhance medical service quality, patient safety, and staff job satisfaction through skill development, vision establishment, and clear direction-setting [3, 4, 5]. Moreover, leadership support can effectively enhance staff well-being and work efficiency [6, 7]. For example, Mendes et al. found that the quality of healthcare is significantly influenced by four dimensions of leadership: communication, recognition, development, and innovation [8]. Additionally, Shanafelt et al. discovered that leaders can effectively reduce employee burnout and subsequently improve the quality of medical services by formulating and implementing targeted work interventions and motivating employees [9].

Job satisfaction among healthcare professionals is a crucial indicator of healthcare performance, playing a vital role in addressing challenges related to hospital quality outcomes, patient satisfaction, and nurse retention rates [10, 11, 12, 13]. Researchers from different national backgrounds have conducted studies on the job satisfaction of healthcare workers across various disciplines. For example, Balasubramanian et al. examined the satisfaction of immigrant dentists in Australia [14], Mascari et al. studied physicians and hospital researchers in the United States [15], and Rosta et al. investigated the satisfaction of doctors in Norway [12]. Research has demonstrated that characteristics of the work environment, balanced workloads, relationships with colleagues, career opportunities, and leadership support all influence job satisfaction [16]. Several instruments are commonly used to measure job satisfaction, each relevant depending on the context and discipline. For instance, the Job Descriptive Index (JDI) focuses on different facets of job satisfaction such as work, pay, promotion, supervision, and co-workers [17]. The Job Satisfaction Survey (JSS) covers similar dimensions and is particularly useful in public sector organizations due to its comprehensive nature and ease of use [18]. The Minnesota Satisfaction Questionnaire (MSQ) is a comprehensive tool that assesses employee satisfaction across multiple dimensions, including intrinsic and extrinsic satisfaction, and is commonly used for evaluating job satisfaction in the healthcare field [19].

Recent studies have linked leadership to healthcare professionals’ job satisfaction, highlighting the pivotal role of leadership in guiding, coordinating, and motivating employees [5]. For instance, the Mayo Clinic found that leadership from immediate supervisors could alleviate burnout and increase job satisfaction [20]. Choi’s research indicated that leadership empowerment significantly enhances nursing staff’s job satisfaction [21]. Additionally, Liu discovered that the support provided by hospital senior leadership is closely associated with employee satisfaction [22].

In China, while leadership research has gained some traction in areas such as business and education, it remains relatively scarce within healthcare institutions. Existing studies primarily focus on the nursing sector, and comprehensive assessments of leadership at the leading public hospitals (top 10% of Chinese hospitals) have not been extensively conducted [23, 24]. Research on leadership and healthcare professionals’ satisfaction often relies on single indicators to measure job satisfaction, such as overall job satisfaction or specific aspects like compensation satisfaction and burnout levels [25]. This narrow focus may fail to fully capture the multidimensional nature of employee satisfaction, which includes aspects such as workload, ability utilization, sense of achievement, initiative, training and self-development, and interpersonal communication [26]. Additionally, most existing studies focus on the job satisfaction of nurses or physicians in isolation, lacking comparative research across different groups within healthcare institutions, such as doctors, nurses, and administrative personnel [27, 28, 29].

Therefore, this study utilized the MSQ to conduct a thorough assessment of employee satisfaction and assess the impact of leadership support on the satisfaction of healthcare personnel in China’s leading public hospitals. Through this research, we aim to enhance the core competitiveness of hospitals and provide valuable data to support leadership assessments in developing countries’ healthcare institutions. Moreover, this study seeks to contribute to the broader international understanding of effective leadership practices in China’s leading public hospitals, with implications for global health management strategies.

Study design and participants

From July to December 2021, a cross-sectional survey was conducted among healthcare professionals in three of China’s leading hospitals. These hospitals represent three regions of China with different levels of social and economic development: one in the eastern region, one in the central region, and one in the western region. In each hospital, a convenience sampling method was used to administer a questionnaire survey to physicians, nurses, and administrative staff.

Criteria for inclusion of healthcare professionals: (1) employed at the hospital for at least 1 year or more; (2) formal employees of the hospital (full-time staff); (3) possessing cognitive clarity and the ability to independently understand and respond to electronic questionnaires, as assessed by their leaders. Exclusion criteria: (1) diagnosed with mental health disorders that impair their ability to participate, as identified by the hospital’s mental health professionals; (2) unable to communicate effectively due to severe language barriers, hearing impairments, or other communication disorders, as determined by their direct supervisors or relevant medical evaluations; (3) visiting scholars, interns, or graduate students currently enrolled in a degree program.

Instrument development

Leadership support

In reference to the Malcolm Baldrige National Quality Award (MBNQA) framework and Supporting Relationship Theory [6, 30, 31], we determined the final survey scale after three rounds of expert discussion, each involving five to seven individuals. These experts included personnel from health administrative departments, leaders of leading public hospitals, middle management, and researchers specializing in hospital management. Their collective expertise ensured that the survey comprehensively assessed leadership support within hospitals from the perspective of healthcare personnel. The Leadership Support Scale consists of 5 items: Environmental Support: ‘My leaders provide a work environment that helps me perform my job’; Resource Support: ‘My leaders provide the resources needed to improve my work’; Decision Support: ‘My leaders support my decisions to satisfy patients’; Research Support: ‘My leaders support my application for scientific research projects’; and Innovation Encouragement: ‘My leaders encourage me to innovate actively and think about problems in new ways’ (Supplementary material). All questionnaire items are rated on a 5-point Likert scale, ranging from 1 = Strongly Disagree to 5 = Strongly Agree. The Cronbach’s alpha coefficient for the 5-item scale is 0.753.
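
The reliability coefficient quoted above can be reproduced mechanically. Below is a minimal sketch of how Cronbach’s alpha is computed for a multi-item Likert scale; the response matrix here is simulated and purely illustrative.

    # Cronbach's alpha for a respondents x items matrix of Likert scores.
    import numpy as np

    def cronbach_alpha(items: np.ndarray) -> float:
        k = items.shape[1]                         # number of items
        item_vars = items.var(axis=0, ddof=1)      # per-item variances
        total_var = items.sum(axis=1).var(ddof=1)  # variance of scale totals
        return k / (k - 1) * (1 - item_vars.sum() / total_var)

    # Simulated responses to the five leadership-support items (1-5).
    rng = np.random.default_rng(0)
    base = rng.integers(2, 6, size=(100, 1))       # a common "trait" level
    scores = np.clip(base + rng.integers(-1, 2, size=(100, 5)), 1, 5)
    print(round(cronbach_alpha(scores), 3))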

Job satisfaction

The measurement of job satisfaction was carried out using the Minnesota Satisfaction Questionnaire (MSQ) [32, 33], which has been widely used and has been shown to have good reliability and validity in China [34, 35]. The questionnaire consists of 20 items that measure healthcare personnel’s satisfaction with various aspects of their job, including individual job load, ability utilization, achievement, initiative, hospital training and self-development, authority, hospital policies and practices, compensation, teamwork, creativity, independence, moral standards, hospital rewards and punishments, personal responsibility, job security, social service contribution, social status, employee relations and communication, and hospital working conditions and environment. Responses were rated on a balanced scale from 1 to 5, with 1 = Very Dissatisfied, 2 = Dissatisfied, 3 = Neither Dissatisfied nor Satisfied, 4 = Satisfied, and 5 = Very Satisfied. Total scores range from 20 to 100, with higher scores indicating higher satisfaction. In this study, overall job satisfaction was dichotomised at a score of 80 [32]: a score of ≥ 80 was considered satisfied, and a score below 80 was considered dissatisfied. The Cronbach’s alpha coefficient for the questionnaire in this survey was 0.983.
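
A minimal sketch of the scoring rule just described (20 items scored 1 to 5, totals from 20 to 100, with a total of 80 or more coded as satisfied); the file and column names are hypothetical.

    # Score the MSQ and apply the study's satisfaction cut-off.
    import pandas as pd

    df = pd.read_csv("msq_responses.csv")        # hypothetical: msq_01..msq_20
    msq_cols = [f"msq_{i:02d}" for i in range(1, 21)]

    df["msq_total"] = df[msq_cols].sum(axis=1)   # possible range: 20-100
    df["satisfied"] = (df["msq_total"] >= 80).astype(int)  # cut-off >= 80

    print(df["satisfied"].mean())                # overall satisfaction rate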

Investigation process

The survey was administered through an online platform “Wenjuanxing”, and distributed by department heads to healthcare professionals within their respective departments. The selection of departments and potential participants followed a structured process: (1) Potential participants were identified based on the inclusion criteria, which were communicated to the department heads. (2) Department heads received a digital link to the survey, which they forwarded to eligible staff members via email or internal communication platforms. (3) The informed consent form was integrated into the survey link, detailing the research objectives, ensuring anonymity, and emphasizing voluntary participation. At the beginning of the online survey, participants were asked if they agreed to participate. Those who consented continued with the survey, while those who did not agree were directed to end the survey immediately.

According to Kendall’s experience and methodology, the sample size can be 5–10 times the number of independent variables (40 items) [36, 37]. Our sample size is ten times the number of independent variables. Considering potentially disqualified questionnaires, the sample size was increased by 10%, resulting in a minimum total sample size of 460. Therefore, we distributed 500 survey questionnaires.
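
As a worked example of the stated rule (a sketch of the arithmetic only; the paper sets its minimum at 460, so the exact rounding step it used is not spelled out):

    # Kendall-style sample-size arithmetic as described in the text.
    import math

    n_items = 40                          # independent-variable items
    n_min = 10 * n_items                  # upper end of the 5-10x rule -> 400
    n_adjusted = math.ceil(n_min * 1.10)  # +10% for disqualified forms -> 440
    print(n_min, n_adjusted)              # the study used 460 and sent 500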

Data analysis

We summarized the sociodemographic characteristics of the healthcare personnel survey samples using descriptive statistical methods. For all variables, we calculated the frequencies and percentages of categorical variables. Associations between sociodemographic characteristics and healthcare personnel’s perception of leadership support and satisfaction were analyzed using the Pearson χ² test. We employed a binary logistic regression model to estimate the risk ratio of healthcare personnel satisfaction under different levels of leadership support. Estimates from three sequentially adjusted models were reported to transparently demonstrate the impact of various adjustments: (1) unadjusted; (2) adjusted for hospital of origin; (3) adjusted for hospital of origin, gender, age, education level, job type, and department. For the binary logistic regression model, we employed a backward stepwise regression approach, with inclusion at P < 0.05 and exclusion at P > 0.10. In all analyses, a two-tailed p-value of < 0.05 was considered significant, and all analyses were conducted using SPSS 26.0 (IBM Corp., Armonk, NY, USA).
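
The models themselves were fitted in SPSS with backward stepwise selection. As an illustration, the sketch below fits the fully adjusted model (model 3) in Python and recovers odds ratios with 95% confidence intervals from the coefficients; the data file and all variable names are hypothetical.

    # Fully adjusted logistic model; odds ratios via exponentiated terms.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("leadership_survey.csv")    # hypothetical analysis file

    formula = ("satisfied ~ resource_support + environment_support"
               " + decision_support + research_support + innovation_support"
               " + C(hospital) + C(gender) + C(age_group) + C(education)"
               " + C(job_type) + C(department)")
    res = smf.logit(formula, data=df).fit(disp=False)

    # Exponentiate coefficients and CI bounds to get ORs with 95% CIs.
    or_table = np.exp(pd.concat([res.params, res.conf_int()], axis=1))
    or_table.columns = ["OR", "2.5%", "97.5%"]
    print(or_table)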

Demographic characteristics and job satisfaction

This study recruited a total of 500 healthcare personnel from hospitals to participate in the survey, with 487 valid questionnaires collected, resulting in an effective response rate of 97.4%. The majority of participants were female (77.21%), with ages concentrated between 30 and 49 years old (73.71%). The predominant job titles were mid-level (45.17%) and junior-level (27.31%), and educational backgrounds were mostly at the undergraduate (45.17%) and graduate (48.25%) levels. The marital status of most participants was married (79.88%), and their primary departments were surgery (38.19%) and internal medicine (24.85%). The overall satisfaction rate among the sampled healthcare personnel was 74.33%. Differences in satisfaction were statistically significant among healthcare personnel of different genders, ages, educational levels, job types, hospitals, and departments (P < 0.05). Table 1 displays the participants’ demographic characteristics and job satisfaction.

Analysis of satisfaction across the individual dimensions showed that “Social service” (94.3%) and “Moral values” (92.0%) had the highest satisfaction, while “Activity” (66.8%) and “Compensation” (71.9%) had the lowest. Table 2 shows participants’ job satisfaction in the different dimensions.

Perception of different types of leadership support among healthcare professionals

Overall, surveyed healthcare personnel perceived high levels of support from hospital leadership for research encouragement (96.92%), innovation inspiration (96.30%), and the work environment (93.63%), while perceiving lower levels of support for decision-making (81.72%) and resource allocation (80.08%). Female healthcare personnel perceived significantly higher levels of resource support compared to males (P < 0.05). Healthcare personnel in the 30–39 age group perceived significantly higher levels of resource, environmental, and research support compared to other age groups (P < 0.05). Healthcare personnel with senior-level job titles perceived significantly lower levels of resource and decision-making support compared to associate-level and lower job titles, and those with doctoral degrees perceived significantly lower levels of resource support compared to other educational backgrounds (P < 0.05).

Clinical doctors perceived significantly lower levels of resource and environmental support compared to administrative personnel and clinical nurses, while administrative personnel perceived significantly lower levels of decision-making support compared to clinical doctors and clinical nurses (P < 0.05). Among healthcare personnel in internal medicine, perceptions of resource, environmental, research, and innovation support were significantly lower than those in surgery, administration, and other departments, whereas perceptions of decision-making support in administrative departments were significantly lower than in internal medicine, surgery, and other departments (P < 0.05). Figure 1 displays the perception of leadership support among healthcare personnel with different demographic characteristics.

Figure 1. Perception of leadership support among healthcare professionals with different demographic characteristics in China’s leading public hospitals (* P < 0.05, ** P < 0.01, *** P < 0.001).

The impact of leadership support on job satisfaction among healthcare professionals

The study results indicate that healthcare personnel who perceive that their leaders provide sufficient resource, environmental, and decision-making support have significantly higher job satisfaction than those who feel that leaders have not provided enough support (P < 0.05). Similarly, healthcare personnel who perceive that their leaders provide sufficient research and innovation inspiration have significantly higher job satisfaction than those who believe leaders have not provided enough inspiration (P < 0.05). Table 3 displays the univariate analysis of leadership support on healthcare professional satisfaction.

With healthcare personnel satisfaction as the dependent variable, leadership resource support, environmental support, decision-making support, research support, and innovation inspiration were included in the binary logistic regression model. After adjusting for hospital, gender, age, education level, job type, and department, leadership’s increased resource support (OR: 4.312, 95% CI: 2.412–7.710) and environmental support (OR: 4.052, 95% CI: 1.134–14.471) were found to significantly enhance the satisfaction levels of healthcare personnel. Additionally, healthcare professionals in Hospital 2 (OR: 3.654, 95% CI: 1.796–7.435) and Hospital 3 (OR: 2.354, 95% CI: 1.099–5.038) exhibited higher levels of satisfaction compared to those in Hospital 1. Table 4 displays the binary logistic regression analysis of leadership support on satisfaction among healthcare professionals.
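
The reported odds ratios and confidence intervals are the standard exponentiated quantities from a fitted logistic model: OR = exp(β) and 95% CI = exp(β ± 1.96·SE). A small sketch follows; the β and SE below are back-calculated from the reported interval purely for illustration, not taken from the study’s output.

```python
import math

def odds_ratio_ci(beta, se, z=1.96):
    """OR = exp(beta); 95% CI = exp(beta +/- z * SE(beta))."""
    return math.exp(beta), (math.exp(beta - z * se), math.exp(beta + z * se))

# Roughly reproducing the reported resource-support estimate
# (OR 4.312, 95% CI 2.412-7.710): beta = ln(OR), SE from the CI width.
beta = math.log(4.312)
se = (math.log(7.710) - math.log(2.412)) / (2 * 1.96)
print(odds_ratio_ci(beta, se))  # ~ (4.312, (2.412, 7.710))
```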

Discussion

This study aimed to determine the impact of support from hospital senior leadership on the job satisfaction of healthcare personnel and to explore the effects of demographic factors and different types of support on the job satisfaction of healthcare personnel in China. The research indicates that hospital leadership’s resource support, environmental support, and decision-making support have a significantly positive impact on the job satisfaction of healthcare personnel. These forms of support can help healthcare personnel adapt to a constantly changing work environment and its demands, thereby enhancing their job satisfaction and, ultimately, positively influencing the overall performance of the hospital and the quality of patient care.

Our research indicates that, using the same MSQ to measure job satisfaction, job satisfaction among healthcare personnel in China’s top-tier hospitals stands at 74.33%, higher than the results of a nationwide survey in 2016 (48.22%) [38] and a 2013 survey among doctors in Shanghai (35.2%) [39]. This improvement is likely due to the Chinese government’s recent focus on healthcare personnel’s compensation and benefits, along with corresponding improvement measures, which have increased job satisfaction. It is worth noting that while job satisfaction among healthcare personnel in China’s top-tier hospitals is higher than the national average, it is slightly lower than the MSQ-measured job satisfaction of doctors in the United States (81.73%) [40]. However, compared with the MSQ-measured job satisfaction of doctors in Southern Nigeria (26.7%) [32], nurses in South Korea (65.89%) [41], and nurses in Iran (59.7%) [42], the level among healthcare personnel in China’s top-tier hospitals is considerably higher. This suggests that China has achieved some success in improving healthcare personnel’s job satisfaction. Studies have shown that, for healthcare professionals, job satisfaction is influenced by work conditions, compensation, and opportunities for promotion, with varying levels of satisfaction observed across different cultural backgrounds and specialties [29, 43]. Furthermore, the observed differences in job satisfaction levels can be influenced by cultural factors unique to China, including hierarchical workplace structures and an emphasis on collective well-being over individual recognition.

Leadership support can influence employees’ work attitudes and emotions. Effective leaders can establish a positive work environment and provide constructive feedback, thereby enhancing employee job satisfaction [44, 45]. Our results show that clinical physicians perceive significantly lower levels of resource and environmental support than administrative staff and clinical nurses, while administrative staff perceive significantly lower levels of decision-making support than clinical physicians and clinical nurses. This difference can be attributed to their different roles and job natures within the healthcare team [9]. Nurses typically have direct patient care responsibilities, performing medical procedures, providing care, and monitoring patient conditions, leaving them in greater need of resource and environmental support to deliver high-quality care efficiently [46]. Doctors are usually responsible for clinical diagnosis and treatment and require better healthcare environments and resources given their responsibility for patients’ lives. Administrative staff often oversee the hospital’s day-to-day operations and management, including budgeting, resource allocation, and personnel management. Their work is more organizationally oriented, involving strategic planning and management decisions, so they may require more decision-making support to succeed at the managerial level [47].

The job satisfaction of healthcare personnel is influenced by various factors, including the work environment, workload, career development, and leadership support [48, 49]. When healthcare personnel are satisfied with their work, their job enthusiasm increases, contributing to higher patient satisfaction. Healthcare organizations should assess the leadership and management qualities of each hospital to enhance their leadership capabilities; this directly affects employee satisfaction, retention rates, and patient satisfaction [50]. Resource support provided by leaders, such as data, human resources, financial resources, equipment, supplies (such as medications), and training opportunities, significantly influences the job satisfaction of healthcare personnel [51]. From a theoretical perspective, researchers regard leaders’ provision of resources to followers as one of the primary ways of influencing employee satisfaction [7]. These resources can help healthcare personnel better fulfill their job responsibilities and improve work efficiency, thereby enhancing their job satisfaction.

In hospital organizations, leaders play a crucial role in shaping the work environment for healthcare personnel and providing decision-making support [52, 53]. Hospital leaders are committed to ensuring a safe work environment for their employees by formulating and promoting policies and regulations. They also play a key role in actively identifying and addressing issues in the work environment, including conflicts among employees and resource shortages. These initiatives aim to continuously improve working conditions, enabling healthcare personnel to better fulfill their duties [54]. Such actions not only improve the job satisfaction of healthcare personnel but also lay the foundation for providing high-quality healthcare services.

It is worth noting that, in the context of leading public hospitals in China, leadership support for research, encouragement of innovation, and decision-making support do not appear to significantly enhance the job satisfaction of healthcare personnel, which differs from some international literature [23, 55, 56]. International studies often suggest that fostering innovation is particularly important in influencing healthcare personnel’s job satisfaction [57, 58], and that inspiring a shared vision is particularly important in motivating nursing staff and enhancing their job satisfaction and organizational commitment [59]. Our findings may indicate that although Chinese healthcare personnel perceive leadership’s encouragement of innovation and research and its decision-making support, these forms of support do not translate into significantly higher job satisfaction, whereas material support (resources and the work environment) does significantly increase their satisfaction.

Strengths and limitations of this study

For the first time, we analyzed the role of perceived leadership support in enhancing the job satisfaction of healthcare providers in China’s leading public hospitals. We assessed the impact of perceived leadership on healthcare professional satisfaction across five dimensions: resources, environment, decision-making, research, and innovation. The sample includes physicians, nurses, and administrative staff, providing a comprehensive understanding of leadership support’s impact across diverse positions and professional groups.

However, it’s important to note that this study exclusively recruited healthcare professionals from three leading public hospitals in China, limiting the generalizability of the research findings. Additionally, the cross-sectional nature of the study means that causality cannot be established. There is also a potential for response bias as the data were collected through self-reported questionnaires. Furthermore, the use of convenience sampling may introduce selection bias, and the reliance on electronic questionnaires may exclude those less comfortable with digital technology.

Implications for research and practice

The results of this study provide important empirical evidence supporting the significance of leadership assessment in the context of Chinese hospitals. Specifically, the findings underscore the critical role of leadership support in enhancing job satisfaction among healthcare professionals, which has implications for hospital operational efficiency and the quality of patient care. For hospital administrators and policymakers, the study highlights the need to prioritize leadership development programs that focus on the three dimensions of leadership support: resources, environment, and decision-making. Implementing targeted interventions in these areas can lead to improved job satisfaction. Moreover, this study serves as a foundation for comparative research across different cultural and organizational contexts, contributing to a deeper understanding of how leadership practices can be optimized to meet the unique needs of healthcare professionals in various regions.

Our study found a strong positive association between leadership support in China’s leading public hospitals and employee job satisfaction. Leaders foster this by providing ample resources to ensure employees can effectively fulfill their job responsibilities, by creating a comfortable work environment, and by encouraging active employee participation. By nurturing outstanding leadership and support, hospitals can enhance employee job satisfaction, leading to improved overall performance and service quality. This is crucial for providing high-quality healthcare and meeting patient needs.

Data availability

Data are available upon reasonable request.

References

Kempster S, Parry KW. Grounded theory and leadership research: a critical realist perspective. Leadersh Q. 2011;22(1):106–20.

Northouse PG. Leadership: Theory and Practice; 2014.

Mosadeghrad AM. Factors affecting medical service quality. Iran J Public Health. 2014;43(2):210.

de Vries JM, Curtis EA. Nursing leadership in Ireland: experiences and obstacles. Leadersh Health Serv. 2019;32(3):348–63.

Boamah SA, Laschinger HKS, Wong C, Clarke S. Effect of transformational leadership on job satisfaction and patient safety outcomes. Nurs Outlook. 2018;66(2):180–9.

Likert R. The human organization: its management and values. 1967.

Inceoglu I, Thomas G, Chu C, Plans D, Gerbasi A. Leadership behavior and employee well-being: an integrated review and a future research agenda. Leadersh Q. 2018;29(1):179–202.

Mendes L, Fradique MJJG. Influence of leadership on quality nursing care. Int J Health Care Qual Assur. 2014;27(5):439–50.

Shanafelt TD, Noseworthy JH, editors. Executive leadership and physician well-being: nine organizational strategies to promote engagement and reduce burnout. Mayo Clinic Proceedings; 2017: Elsevier.

Aiken LH, Clarke SP, Sloane DM, Sochalski J, Silber JH. Hospital nurse staffing and patient mortality, nurse burnout, and job dissatisfaction. JAMA. 2002;288(16):1987–93.

Cicolini G, Comparcini D, Simonetti V. Workplace empowerment and nurses’ job satisfaction: a systematic literature review. J Nurs Manag. 2014;22(7):855–71.

Rosta J, Aasland OG, Nylenna M. Changes in job satisfaction among doctors in Norway from 2010 to 2017: a study based on repeated surveys. BMJ open. 2019;9(9):e027891.

Zhang Z, Shi G, Li L, Bian Y. Job satisfaction among primary care physicians in western China. BMC Fam Pract. 2020;21:1–10.

Balasubramanian M, Spencer AJ, Short SD, Watkins K, Chrisopoulos S, Brennan DS. Job satisfaction among ‘migrant dentists’ in Australia: implications for dentist migration and workforce policy. Aust Dent J. 2016;61(2):174–82.

Mascari C. Job satisfaction of doctors vs. researchers in the US University Hospital Environment: a comparative case study. Northcentral University; 2020.

Friedberg MW, Chen PG, Van Busum KR, Aunon F, Pham C, Caloyeras J et al. Factors affecting physician professional satisfaction and their implications for patient care, health systems, and health policy. Rand Health Q. 2014;3(4).

Nhung DTH, Linh TM. Identifying work-related factors influencing job satisfaction using job descriptive index questionnaire: a study of IT companies in Hanoi. J Int Econ Manage. 2021;21(1):63–85.

Gomez Garcia R, Alonso Sangregorio M, Lucía Llamazares Sánchez M. Evaluation of job satisfaction in a sample of Spanish social workers through the ‘Job satisfaction survey’scale. Eur J Social Work. 2018;21(1):140–54.

Walkowiak D, Staszewski R. The job satisfaction of Polish nurses as measured with the Minnesota satisfaction questionnaire. J Public Health Nurs Med Rescue. 2019;4:34–40.

Dyrbye LN, Major-Elechi B, Hays JT, Fraser CH, Buskirk SJ, West CP, editors. Relationship between organizational leadership and health care employee burnout and satisfaction. Mayo Clinic Proceedings; 2020: Elsevier.

Choi SL, Goh CF, Adam MBH, Tan OK. Transformational leadership, empowerment, and job satisfaction: the mediating role of employee empowerment. Hum Resour Health. 2016;14:1–14.

Liu W, Zhao S, Shi L, Zhang Z, Liu X, Li L, et al. Workplace violence, job satisfaction, burnout, perceived organisational support and their effects on turnover intention among Chinese nurses in tertiary hospitals: a cross-sectional study. BMJ open. 2018;8(6):e019525.

Wang X, Chontawan R, Nantsupawat R. Transformational leadership: effect on the job satisfaction of registered nurses in a hospital in China. J Adv Nurs. 2012;68(2):444–51.

Wang L, Tao H, Bowers BJ, Brown R, Zhang Y. When nurse emotional intelligence matters: how transformational leadership influences intent to stay. J Nurs Manag. 2018;26(4):358–65.

Adamopoulos IP. Job satisfaction in public health care sector, measures scales and theoretical background. Eur J Environ Public Health. 2022;6(2):em0116.

Montano D, Reeske A, Franke F, Hüffmeier J. Leadership, followers’ mental health and job performance in organizations: a comprehensive meta-analysis from an occupational health perspective. J Organizational Behav. 2017;38(3):327–50.

Carlson MA, Morris S, Day F, Dadich A, Ryan A, Fradgley EA, Paul C. Psychometric properties of leadership scales for health professionals: a systematic review. Implement Sci. 2021;16(1):85.

Aiken LH, Sermeus W, Van den Heede K, Sloane DM, Busse R, McKee M et al. Patient safety, satisfaction, and quality of hospital care: cross sectional surveys of nurses and patients in 12 countries in Europe and the United States. BMJ. 2012;344.

Cunningham R, Westover J, Harvey J. Drivers of job satisfaction among healthcare professionals: a quantitative review. Int J Healthc Manag. 2023;16(4):534–42.

Foster TC, Johnson JK, Nelson EC, Batalden PB. Using a Malcolm Baldrige framework to understand high-performing clinical microsystems. BMJ Qual Saf. 2007;16(5):334–41.

Shields JA, Jennings JL. Using the Malcolm Baldrige are we making progress survey for organizational self-assessment and performance improvement. J Healthc Qual. 2013;35(4):5–15.

Bello S, Adewole DA, Afolabi RF. Work facets predicting overall job satisfaction among resident doctors in selected teaching hospitals in Southern Nigeria: a Minnesota satisfaction Questionnaire Survey. J Occup Health Epidemiol. 2020;9(1):52–60.

Ozyurt A, Hayran O, Sur H. Predictors of burnout and job satisfaction among Turkish physicians. J Association Physicians. 2006;99(3):161–9.

Wang YY, Xiong Y, Zhang Y, Li CY, Fu LL, Luo HL, Sun Y. Compassion fatigue among haemodialysis nurses in public and private hospitals in China. Int J Nurs Pract. 2022;28(1):e13011.

Jiang F, Hu L, Rakofsky J, Liu T, Wu S, Zhao P, et al. Sociodemographic characteristics and job satisfaction of psychiatrists in China: results from the first nationwide survey. Psychiatric Serv. 2018;69(12):1245–51.

Kendall MG. Note on bias in the estimation of autocorrelation. Biometrika. 1954;41(3–4):403–4.

Hinkle DE, Wiersma W, Jurs SG. Applied statistics for the behavioral sciences. Houghton Mifflin college division; 2003.

Zhou H, Han X, Zhang J, Sun J, Hu L, Hu G, et al. Job satisfaction and associated factors among medical staff in tertiary public hospitals: results from a national cross-sectional survey in China. Int J Environ Res Public Health. 2018;15(7).

Liu J, Yu W, Ding T, Li M, Zhang L. Cross-sectional survey on job satisfaction and its associated factors among doctors in tertiary public hospitals in Shanghai, China. BMJ Open. 2019;9(3):e023823.

Ritter B. Senior healthcare leaders: exploring the relationship between the rates of job satisfaction and person-job value congruence. Int J Healthc Manag. 2021;14(1):85–90.

Shin S, Oh SJ, Kim J, Lee I, Bae SH. Impact of nurse staffing on intent to leave, job satisfaction, and occupational injuries in Korean hospitals: a cross-sectional study. Nurs Health Sci. 2020;22(3):658–66.

Shahrbabaki PM, Abolghaseminejad P, Lari LA, Zeidabadinejad S, Dehghan M. The relationship between nurses’ psychological resilience and job satisfaction during the COVID-19 pandemic: a descriptive-analytical cross-sectional study in Iran. BMC Nurs. 2023;22(1):137.

Shanafelt TD, Hasan O, Dyrbye LN, Sinsky C, Satele D, Sloan J, West CP. Changes in Burnout and Satisfaction With Work-Life Balance in Physicians and the General US Working Population Between 2011 and 2014. Mayo Clin Proc. 2015;90(12):1600-13.

Laschinger HKS, Wong CA, Grau AL. The influence of authentic leadership on newly graduated nurses’ experiences of workplace bullying, burnout and retention outcomes: a cross-sectional study. Int J Nurs Stud. 2012;49(10):1266–76.

Chang C-S. Moderating effects of nurses’ organizational support on the relationship between job satisfaction and organizational commitment. West J Nurs Res. 2015;37(6):724–45.

Lake ET, Friese CR. Variations in nursing practice environments: relation to staffing and hospital characteristics. Nurs Res. 2006;55(1):1–9.

Bååthe F, Erik Norbäck L. Engaging physicians in organisational improvement work. J Health Organ Manag. 2013;27(4):479–97.

Zhang M, Zhu CJ, Dowling PJ, Bartram T. Exploring the effects of high-performance work systems (HPWS) on the work-related well-being of Chinese hospital employees. Int J Hum Resource Manage. 2013;24(16):3196–212.

Baek H, Han K, Ryu E. Authentic leadership, job satisfaction and organizational commitment: the moderating effect of nurse tenure. J Nurs Adm Manag. 2019;27(8):1655–63.

Robbins B, Davidhizar R. Transformational leadership in health care today. Health Care Manag. 2020;39(3):117–21.

Hussain MK, Khayat RAM. The impact of transformational leadership on job satisfaction and organisational commitment among hospital staff: a systematic review. J Health Manage. 2021;23(4):614–30.

Mete M, Goldman C, Shanafelt T, Marchalik D. Impact of leadership behaviour on physician well-being, burnout, professional fulfilment and intent to leave: a multicentre cross-sectional survey study. BMJ open. 2022;12(6):e057554.

Avolio BJ, Walumbwa FO, Weber TJ. Leadership: current theories, research, and future directions. Annu Rev Psychol. 2009;60:421–49.

Zhang L-f, You L-m, Liu K, Zheng J, Fang J-b, Lu M-m, et al. The association of Chinese hospital work environment with nurse burnout, job satisfaction, and intention to leave. Nurs Outlook. 2014;62(2):128–37.

Cummings G, Estabrooks CA. The effects of hospital restructuring that included layoffs on individual nurses who remained employed: a systematic review of impact. Int J Sociol Soc Policy. 2003;23(8/9):8–53.

Laschinger HKS, Finegan J, Shamian J. The impact of workplace empowerment, organizational trust on staff nurses’ work satisfaction and organizational commitment. Health Care Manage Rev. 2001:7–23.

Wong CA, Laschinger HKS. The influence of frontline manager job strain on burnout, commitment and turnover intention: a cross-sectional study. Int J Nurs Stud. 2015;52(12):1824–33.

Alrowwad A, Abualoush SH, Masa’deh R. Innovation and intellectual capital as intermediary variables among transformational leadership, transactional leadership, and organizational performance. J Manage Dev. 2020;39(2):196–222.

Chiok Foong Loke J. Leadership behaviours: effects on job satisfaction, productivity and organizational commitment. J Nurs Adm Manag. 2001;9(4):191–204.

Funding

This study was funded by the Fundamental Research Funds for the Central Universities (2020-RC630-001), the Fundamental Research Funds for the Central Universities (3332022166), and the Chinese Academy of Medical Sciences (CAMS) Innovation Fund for Medical Sciences (2021-I2M-1-046).

Author information

Authors and Affiliations

Beijing Jishuitan Hospital, Capital Medical University, Beijing, 100035, China

Jinhong Zhao

School of Health Policy and Management, Chinese Academy of Medical Sciences & Peking Union Medical College, Beijing, 100730, China

Jinhong Zhao, Tingfang Liu & Yuanli Liu

Contributions

JZ, TL, and YL designed the study. JZ collected the original data in China, reviewed the literature, performed the analyses, and wrote the first draft of the manuscript. TL and YL critically revised the manuscript. All authors contributed to the interpretation of data and the final approved version.

Corresponding authors

Correspondence to Tingfang Liu or Yuanli Liu.

Ethics declarations

Ethics approval

This study was conducted according to the guidelines of the Declaration of Helsinki and was approved by the Chinese Academy of Medical Sciences & Peking Union Medical College Institutional Review Board (CAMS & PUMC-IRC-2020-026). The survey was distributed by department heads and included the informed consent form and survey materials. The informed consent form described the research objectives, assured anonymity, emphasized voluntary participation, and instructed participants to complete the questionnaire through the online system. The statement ‘No signature is required, completing the survey implies consent to participate in the study’ constituted implied consent.

Patient and public involvement statement

Patients or the public were not involved in the design or conduct of our study.

Consent for publication

Not applicable.

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary Material 1

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/ .

About this article

Cite this article

Zhao, J., Liu, T. & Liu, Y. Leadership support and satisfaction of healthcare professionals in China’s leading hospitals: a cross-sectional study. BMC Health Serv Res 24, 1016 (2024). https://doi.org/10.1186/s12913-024-11449-3

Received : 07 January 2024

Accepted : 16 August 2024

Published : 02 September 2024

DOI : https://doi.org/10.1186/s12913-024-11449-3


Keywords

  • Hospital leadership
  • Satisfaction
  • Healthcare professional

