
Nuffield Department of Primary Care Health Sciences, University of Oxford

Ten steps to producing a successful mixed methods dissertation in Evidence-Based Health Care

9 June 2017

Tips for students

This blog is part of a series for Evidence-Based Health Care MSc students undertaking their dissertations, by Research Assistant Alice Tompson.

I thought Margaret Głogowska would be a great person to chat to about the opportunities and challenges of writing a mixed methods thesis. Margaret has a wealth of research experience and co-coordinates the Mixed Methods module of the Evidence-Based Health Care programme.

Here are her top tips for writing a successful mixed methods dissertation:

1) Start writing as soon as you can

Beginning to write your dissertation can be daunting – a blank screen can be very intimidating! Margaret suggests the methods section can be a good place to start. Writing what you are doing, and how you are doing it, is often more straightforward than explaining why you're doing it, or describing and discussing your results. Plus, it'll help you identify any holes in your research plans.

2) Mixed methods isn’t a game of two halves

Margaret explains that a common mistake is to think of mixed methods studies as having to have two components. In fact, they have three: in addition to the quantitative and qualitative strands, successful dissertations will pull these together to provide insight greater than the sum of the parts. This doesn't only relate to the results: be sure to include your plans for integration in your methods section too. This article by Jenny Burt gives some further advice on "following the mixed methods trail".

3) Think about the structure

When writing up her own work, Margaret asks herself: "What's a good way to bring this together to answer my research question?" You could follow a typical quantitative approach in which each component is reported sequentially (i.e. quantitative, qualitative, integration). Alternatively, you could adopt a more qualitative approach, organising your results by themes, each illustrated with both qualitative and quantitative data. Think about which structure will enable you to present the fullest picture of the issue you are investigating. In this article, Alicia O'Cathain and colleagues describe three approaches to integrating mixed methods data.

4) It’s not about the “right answer”

Don’t be disheartened if the results from the different components of your study are not in agreement. Instead of attempting to establish which is more valid, use dissonant findings as an opportunity to return to your datasets to explore the reasons for these differences. This will enrich your understanding and enable a full account to be presented.

5) Embrace the flexibility

Mixed methods studies are a relatively recent development that can take many forms. As a result, there are not currently any reporting standards that students can use to structure their work. Although this can be daunting, Margaret encourages students to use this freedom to work to their advantage. Be creative and flexible to enable you to present a rich, complete account of your work.

6) Be systematic and rigorous

Although mixed methods offer flexibility, this must not come at the expense of rigour or transparency. When writing up, provide enough detail for your examiners and readers to be able to replicate your methods and analyses. Justify the approaches you took and the decisions you made. Enable them to follow the story.

7) Read the literature

The field of mixed methods is advancing all the time. Refer to the literature for methodological developments, for example how to display data, and also to see how published studies reported their mixed method projects.

To get you started, here are three helpful papers Margaret uses as teaching examples:

  • Van den Bruel et al. (2016) C-reactive protein point-of-care testing in acutely ill children: a mixed methods study in primary care. Archives of Disease in Childhood. doi:10.1136/archdischild-2015-309228
  • Moffat et al. (2006) Using quantitative and qualitative data in health services research – what happens when mixed method findings conflict? BMC Health Services Research 6:28. doi:10.1186/1472-6963-6-28
  • Casey et al. (2014) A mixed methods study exploring the factors and behaviours that affect glycemic control following a structured education program: the Irish DAFNE study. Journal of Mixed Methods Research 10(2):182–203.

She also recommends the work of Alan Bryman, a pioneer in combining qualitative and quantitative research.

8) Fortune favours the prepared!

Keep your research notebook with you: it will allow you to keep track of ideas, useful references, and helpful conversations. Fortune favours the prepared so always keep your notebook close to hand!

9) Be concise

Word limits are a perennial issue in mixed methods research. Two methods plus integration means there is a lot of information to convey. No word can be superfluous, and it may take several drafts to cut out the clutter. Use tables and appendices to "make the most of your precious word count".

10) Final steps – publishing your thesis

The value of mixed methods, particularly in applied health research, is increasingly being recognised.  Based on her own experience, Margaret suggests contacting journal editors for advice on how to tailor your manuscript for their particular audience to increase your chances of it being accepted.

If you are interested in learning more about the Evidence-Based Health Care module "Mixed Methods in Health Research", take a look at the module's webpage.

Mixed Methods Research | Definition, Guide & Examples

Published on August 13, 2021 by Tegan George . Revised on June 22, 2023.

Mixed methods research combines elements of quantitative research and qualitative research in order to answer your research question . Mixed methods can help you gain a more complete picture than a standalone quantitative or qualitative study, as it integrates benefits of both methods.

Mixed methods research is often used in the behavioral, health, and social sciences, especially in multidisciplinary settings and complex situational or societal research.

Example mixed methods research questions:

  • To what extent does the frequency of traffic accidents (quantitative) reflect cyclist perceptions of road safety (qualitative) in Amsterdam?
  • How do student perceptions of their school environment (qualitative) relate to differences in test scores (quantitative)?
  • How do interviews about job satisfaction at Company X (qualitative) help explain year-over-year sales performance and other KPIs (quantitative)?
  • How can voter and non-voter beliefs about democracy (qualitative) help explain election turnout patterns (quantitative) in Town X?
  • How do average hospital salary measurements over time (quantitative) help to explain nurse testimonials about job satisfaction (qualitative)?

Table of contents

  • When to use mixed methods research
  • Mixed methods research designs
  • Advantages of mixed methods research
  • Disadvantages of mixed methods research
  • Other interesting articles
  • Frequently asked questions

When to use mixed methods research

Mixed methods research may be the right choice if your research process suggests that quantitative or qualitative data alone will not sufficiently answer your research question. There are several common reasons for using mixed methods research:

  • Generalizability: Qualitative research usually has a smaller sample size, and thus is not generalizable. In mixed methods research, this comparative weakness is mitigated by the comparative strength of "large N," externally valid quantitative research.
  • Contextualization: Mixing methods allows you to put findings in context and add richer detail to your conclusions. Using qualitative data to illustrate quantitative findings can help "put meat on the bones" of your analysis.
  • Credibility: Using different methods to collect data on the same subject can make your results more credible. If the qualitative and quantitative data converge, this strengthens the validity of your conclusions. This process is called triangulation.

As you formulate your research question , try to directly address how qualitative and quantitative methods will be combined in your study. If your research question can be sufficiently answered via standalone quantitative or qualitative analysis, a mixed methods approach may not be the right fit.

But mixed methods might be a good choice if you want to meaningfully integrate quantitative and qualitative questions in a single research study.

Keep in mind that mixed methods research doesn’t just mean collecting both types of data; you need to carefully consider the relationship between the two and how you’ll integrate them into coherent conclusions.

Mixed methods can be very challenging to put into practice, and comes with the same risk of research biases as standalone studies, so it's a less common choice than standalone qualitative or quantitative research.

Mixed methods research designs

There are different types of mixed methods research designs. The differences between them relate to the aim of the research, the timing of the data collection, and the importance given to each data type.

As you design your mixed methods study, also keep in mind:

  • Your research approach ( inductive vs deductive )
  • Your research questions
  • What kind of data is already available for you to use
  • What kind of data you’re able to collect yourself.

Here are a few of the most common mixed methods designs.

Convergent parallel

In a convergent parallel design, you collect quantitative and qualitative data at the same time and analyze them separately. After both analyses are complete, compare your results to draw overall conclusions.

For example, suppose you are studying cyclist road safety in Amsterdam, as in the research question above:

  • On the qualitative side, you analyze cyclist complaints via the city's database and on social media to find out which areas are perceived as dangerous and why.
  • On the quantitative side, you analyze accident reports in the city's database to find out how frequently accidents occur in different areas of the city.
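The two strands above only meet at the comparison step. A minimal sketch of that workflow in Python, with entirely invented area names and counts (the real study design, not this toy data, is what matters):

```python
from collections import Counter

# Quantitative strand: accident reports per area (all figures invented).
accidents = Counter({"Centrum": 42, "Zuid": 17, "Noord": 8})

# Qualitative strand: areas coded as "perceived dangerous" in complaints
# (hypothetical coding results from the complaint analysis).
complaint_mentions = Counter(
    ["Centrum", "Centrum", "Noord", "Noord", "Noord", "Zuid"]
)

# Each strand is analysed separately...
quant_ranking = [area for area, _ in accidents.most_common()]
qual_ranking = [area for area, _ in complaint_mentions.most_common()]

# ...and the strands only meet at the integration step.
def compare_rankings(quant, qual):
    """Report where the two strands converge and diverge."""
    return {
        "agree_on_top": quant[0] == qual[0],
        "quant_ranking": quant,
        "qual_ranking": qual,
    }

result = compare_rankings(quant_ranking, qual_ranking)
print(result)
```

In this invented example the strands disagree (accidents cluster in one area, perceived danger in another), which is exactly the kind of dissonance the sections below discuss: an opportunity to return to both datasets, not a failure of the study.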

Embedded

In an embedded design, you collect and analyze both types of data at the same time, but within a larger quantitative or qualitative design. One type of data is secondary to the other.

This is a good approach to take if you have limited time or resources. You can use an embedded design to strengthen or supplement your conclusions from the primary type of research design.

Explanatory sequential

In an explanatory sequential design, your quantitative data collection and analysis occurs first, followed by qualitative data collection and analysis.

You should use this design if you think your qualitative data will explain and contextualize your quantitative findings.

Exploratory sequential

In an exploratory sequential design, qualitative data collection and analysis occurs first, followed by quantitative data collection and analysis.

You can use this design to first explore initial questions and develop hypotheses . Then you can use the quantitative data to test or confirm your qualitative findings.

Advantages of mixed methods research

"Best of both worlds" analysis

Combining the two types of data means you benefit from both the detailed, contextualized insights of qualitative data and the generalizable, externally valid insights of quantitative data. The strengths of one type of data often mitigate the weaknesses of the other.

For example, solely quantitative studies often struggle to incorporate the lived experiences of your participants, so adding qualitative data deepens and enriches your quantitative results.

Solely qualitative studies are often not very generalizable, only reflecting the experiences of your participants, so adding quantitative data can validate your qualitative findings.

Method flexibility

Mixed methods are less tied to disciplines and established research paradigms. They offer more flexibility in designing your research, allowing you to combine aspects of different types of studies to distill the most informative results.

Mixed methods research can also combine theory generation and hypothesis testing within a single study, which is unusual for standalone qualitative or quantitative studies.

Disadvantages of mixed methods research

Mixed methods research is very labor-intensive. Collecting, analyzing, and synthesizing two types of data into one research product takes a lot of time and effort, and often involves interdisciplinary teams of researchers rather than individuals. For this reason, mixed methods research has the potential to cost much more than standalone studies.

Differing or conflicting results

If your analysis yields conflicting results, it can be very challenging to know how to interpret them in a mixed methods study. If the quantitative and qualitative results do not agree or you are concerned you may have confounding variables , it can be unclear how to proceed.

Because quantitative and qualitative data take two vastly different forms, it can also be difficult to find ways to systematically compare the results, putting your data at risk for bias in the interpretation stage.


Other interesting articles

If you want to know more about statistics, methodology, or research bias, make sure to check out some of our other articles with explanations and examples.

  • Degrees of freedom
  • Null hypothesis
  • Discourse analysis
  • Control groups
  • Non-probability sampling
  • Quantitative research
  • Inclusion and exclusion criteria

Research bias

  • Rosenthal effect
  • Implicit bias
  • Cognitive bias
  • Selection bias
  • Negativity bias
  • Status quo bias

Frequently asked questions

Quantitative research deals with numbers and statistics, while qualitative research deals with words and meanings.

Quantitative methods allow you to systematically measure variables and test hypotheses . Qualitative methods allow you to explore concepts and experiences in more detail.

In mixed methods research , you use both qualitative and quantitative data collection and analysis methods to answer your research question .

Data collection is the systematic process by which observations or measurements are gathered in research. It is used in many different contexts by academics, governments, businesses, and other organizations.

Triangulation in research means using multiple datasets, methods, theories and/or investigators to address a research question. It’s a research strategy that can help you enhance the validity and credibility of your findings.

Triangulation is mainly used in qualitative research , but it’s also commonly applied in quantitative research . Mixed methods research always uses triangulation.

These are four of the most common mixed methods designs :

  • Convergent parallel: Quantitative and qualitative data are collected at the same time and analyzed separately. After both analyses are complete, compare your results to draw overall conclusions. 
  • Embedded: Quantitative and qualitative data are collected at the same time, but within a larger quantitative or qualitative design. One type of data is secondary to the other.
  • Explanatory sequential: Quantitative data is collected and analyzed first, followed by qualitative data. You can use this design if you think your qualitative data will explain and contextualize your quantitative findings.
  • Exploratory sequential: Qualitative data is collected and analyzed first, followed by quantitative data. You can use this design if you think the quantitative data will confirm or validate your qualitative findings.

Cite this Scribbr article

George, T. (2023, June 22). Mixed Methods Research | Definition, Guide & Examples. Scribbr. Retrieved September 18, 2024, from https://www.scribbr.com/methodology/mixed-methods-research/


Dissertation Research—Planning, Researching, Publishing

Mixed methods research is an approach that combines both quantitative and qualitative forms. It involves philosophical assumptions, and the mixing of qualitative and quantitative approaches in tandem so that the overall strength of a study is greater than either qualitative or quantitative methods ( Creswell, 2007 ) .

Video: Mixed Methods Research

Below is a sampling of books on the subject of "mixed methods research" owned by GW and consortium libraries; each can be requested through the library catalog.

  • Last Updated: Jun 5, 2024 2:48 PM
  • URL: https://libguides.gwu.edu/dissertation

What is mixed methods research?

Last updated 20 February 2023 · Reviewed by Miroslav Damyanov

By blending both quantitative and qualitative data, mixed methods research allows for a more thorough exploration of a research question. It can answer complex research queries that cannot be solved with either qualitative or quantitative research .

Mixed methods research combines the elements of two types of research: quantitative and qualitative.

Quantitative data is collected through methods such as surveys and experiments, and consists of numerical measures such as ages, scores, and percentages.

Qualitative data involves non-numerical measures like beliefs, motivations, attitudes, and experiences, often derived through interviews and focus group research to gain a deeper understanding of a research question or phenomenon.

Mixed methods research is often used in the behavioral, health, and social sciences, as it allows for the collection of numerical and non-numerical data.

  • When to use mixed methods research

Mixed methods research is a great choice when quantitative or qualitative data alone will not sufficiently answer a research question. By collecting and analyzing both quantitative and qualitative data in the same study, you can draw more meaningful conclusions. 

There are several reasons why mixed methods research can be beneficial, including generalizability, contextualization, and credibility. 

For example, let's say you are conducting a survey about consumer preferences for a certain product. You could collect only quantitative data, such as how many people prefer each product and their demographics. Or you could supplement your quantitative data with qualitative data, such as interviews and focus groups , to get a better sense of why people prefer one product over another.

It is important to note that mixed methods research does not only mean collecting both types of data. Rather, it also requires carefully considering the relationship between the two and how you will integrate them.

You may find differing or even conflicting results by combining quantitative and qualitative data . It is up to the researcher to then carefully analyze the results and consider them in the context of the research question to draw meaningful conclusions.

When designing a mixed methods study, it is important to consider your research approach, research questions, and available data. Think about how you can use different techniques to integrate the data to provide an answer to your research question.

  • Mixed methods research design

A mixed methods research design is an approach to collecting and analyzing both qualitative and quantitative data in a single study.

Mixed methods designs offer considerable flexibility, but can also produce differing and even conflicting results. Examples of mixed methods research designs include convergent parallel, explanatory sequential, and exploratory sequential.

By integrating data from both quantitative and qualitative sources, researchers can gain valuable insights into their research topic . For example, a study looking into the impact of technology on learning could use surveys to measure quantitative data on students' use of technology in the classroom. At the same time, interviews or focus groups can provide qualitative data on students' experiences and opinions.

  • Types of mixed method research designs

Mixed methods research can be challenging to put into practice, and it carries the same risk of research bias as standalone studies. It can also reveal differing or conflicting results between its strands, but in return it offers considerable methodological flexibility.

Designing a mixed methods study can be broken down into four types: convergent parallel, embedded, explanatory sequential, and exploratory sequential.

Convergent parallel

The convergent parallel design is when the collection and analysis of both quantitative and qualitative data occur simultaneously but separately. This design aims to create independent but complementary sets of data that inform each other.

For example, you might interview people who live in a certain neighborhood while also conducting a survey of the same people to determine their satisfaction with the area.

Embedded design

The embedded design is when the quantitative and qualitative data are collected simultaneously, but one type of data is embedded within a design dominated by the other. This design is best used when you want to focus on the primary data type but still need the secondary type to further explain it.

For instance, you may survey students about their opinions of an online learning platform and conduct individual interviews to gain further insight into their responses.

Explanatory sequential design

In an explanatory sequential design, quantitative data is collected first, followed by qualitative data. This design is used when you want to further explain a set of quantitative data with additional qualitative information.

An example of this would be if you surveyed employees at a company about their satisfaction with their job and then conducted interviews to gain more information about why they responded the way they did.

Exploratory sequential design

The exploratory sequential design collects qualitative data first, followed by quantitative data. This type of mixed methods research is used when the goal is to explore a topic before collecting any quantitative data.

An example of this could be studying how parents interact with their children by conducting interviews and then using a survey to further explore and measure these interactions.

Integrating data in mixed methods studies can be challenging, but it can be done successfully with careful planning.

No matter which type of design you choose, understanding and applying these principles can help you draw meaningful conclusions from your research.

  • Strengths of mixed methods research

Mixed methods research designs combine the strengths of qualitative and quantitative data, using each type to deepen, enrich, and validate the other. This approach offers more flexibility in designing research, allows theory generation and hypothesis testing to be combined in a single study, and is less tied to particular disciplines and established research paradigms.

Take the example of a study examining the impact of exercise on mental health. Mixed methods research would allow for a comprehensive look at the issue from different angles. 

Researchers could begin by collecting quantitative data through surveys to get an overall view of the participants' levels of physical activity and mental health. Qualitative interviews would follow this to explore the underlying dynamics of participants' experiences of exercise, physical activity, and mental health in greater detail.

Through a mixed methods approach, researchers could more easily compare and contrast their results to better understand the phenomenon as a whole.  

Additionally, mixed methods research is useful when there are conflicting or differing results in different studies. By combining both quantitative and qualitative data, mixed methods research can offer insights into why those differences exist.

For example, if a quantitative survey yields one result while a qualitative interview yields another, mixed methods research can help identify what factors influence these differences by integrating data from both sources.

Overall, mixed methods research designs offer a range of advantages for studying complex phenomena. They can provide insight into different elements of a phenomenon in ways that are not possible with either qualitative or quantitative data alone. Additionally, they allow researchers to integrate data from multiple sources to gain a deeper understanding of the phenomenon in question.  

  • Challenges of mixed methods research

Mixed methods research is labor-intensive and often requires interdisciplinary teams of researchers to collaborate. It also has the potential to cost more than conducting a standalone qualitative or quantitative study.

Interpreting the results of mixed methods research can be tricky, as it can involve conflicting or differing results. Researchers must find ways to systematically compare the results from different sources and methods to avoid bias.

For example, imagine a situation where a team of researchers has employed an explanatory sequential design for their mixed methods study. After collecting data from both the quantitative and qualitative stages, the team finds that the two sets of data provide differing results. This could be challenging for the team, as they must now decide how to effectively integrate the two types of data in order to reach meaningful conclusions. The team would need to stay flexible in their methods and be strategic when integrating the data in order to draw meaningful conclusions from the conflicting results.

  • Advanced frameworks in mixed methods research

Mixed methods research offers powerful tools for investigating complex processes and systems, such as in health and healthcare.

Besides the three basic mixed methods designs (exploratory sequential, explanatory sequential, and convergent parallel), you can use one of four advanced frameworks to extend a mixed methods research design: multistage, intervention, case study, and participatory.

Multistage

This framework mixes qualitative and quantitative data collection methods in stages to gather a more nuanced view of the research question. An example of this is a study that begins with an online survey to collect initial data, followed by in-depth interviews to gain further insights.

Intervention

This design involves collecting quantitative data and then taking action, usually in the form of an intervention or intervention program. An example of this could be a research team who collects data from a group of participants, evaluates it, and then implements an intervention program based on their findings .

Case study

This framework utilizes both qualitative and quantitative research methods to analyze a single case. The researcher examines the specific case in detail to understand the factors influencing it. An example of this could be a study of a specific business organization to understand its organizational dynamics and culture.

Participatory

This type of research focuses on the involvement of participants in the research process. It involves the active participation of participants in formulating and developing research questions, data collection, and analysis.

An example of this could be a study that involves forming focus groups with participants who actively develop the research questions and then provide feedback during the data collection and analysis stages.

The flexibility of mixed methods research means that researchers can combine any of the four frameworks outlined above with the basic designs, such as convergent parallel, explanatory sequential, and exploratory sequential, to suit their particular needs.

Through this method's flexibility, researchers can gain multiple perspectives and uncover differing or even conflicting results when integrating data.

When it comes to integration at the methods level, there are four approaches.

Connecting occurs when one dataset links to the other through the sampling frame, for example by interviewing a subset of survey respondents.

Building occurs when the results of one data collection inform the design of the other, for example survey items developed from earlier interview findings.

Merging involves bringing the qualitative and quantitative datasets together for comparison and joint analysis.

Embedding involves including qualitative data collection within a quantitative study, or vice versa, often at several points.

  • Techniques for integrating data in mixed method studies

Integrating data is an important step in mixed methods research designs. It allows researchers to gain further understanding from their research and gives credibility to the integration process. There are three main techniques for integrating data in mixed methods studies: triangulation protocol, following a thread, and the mixed methods matrix.

Triangulation protocol

This integration method lists the findings from each component of the study side by side, including differing or conflicting results, and considers them together to generate one unified answer.

For example, if a researcher wanted to know what type of music teenagers enjoy listening to, they might employ a survey of 1,000 teenagers as well as five focus group interviews to investigate this. The results might differ; the survey may find that rap is the most popular genre, whereas the focus groups may suggest rock music is more widely listened to. 

The researcher can then use the triangulation protocol to come up with a unified answer—such as that both rap and rock music are popular genres for teenage listeners. 
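The convergence-checking step at the heart of a triangulation protocol can be sketched in a few lines of Python. All of the numbers below are invented to mirror the survey-versus-focus-group example above:

```python
# Hypothetical findings from each component (all numbers invented).
survey_shares = {"rap": 0.41, "rock": 0.33, "pop": 0.26}   # survey of 1,000 teenagers
focus_group_votes = {"rock": 3, "rap": 2, "pop": 0}        # 5 focus groups' preferred genre

def convergence_assessment(quant, qual):
    """Compare each method's top finding and label the comparison."""
    quant_top = max(quant, key=quant.get)
    qual_top = max(qual, key=qual.get)
    if quant_top == qual_top:
        return f"agreement: both methods point to {quant_top}"
    return f"dissonance: survey favours {quant_top}, focus groups favour {qual_top}"

verdict = convergence_assessment(survey_shares, focus_group_votes)
print(verdict)
```

A real triangulation protocol would, of course, record agreement, partial agreement, silence, and dissonance for every finding, not just the top one; this sketch only illustrates the principle of lining the methods up against each other before drawing a unified conclusion.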

Following a thread

This is another method of integration, in which the researcher picks up a theme or question (a "thread") emerging from the analysis of one component and follows it through the data collected by the other methods.

A research design that follows a thread might start by collecting quantitative data on a specific issue, followed by qualitative data to explain the results. This allows the researcher to detect any conflicting information and investigate it further to understand what is really going on.

For example, a researcher who used this research method might collect quantitative data about how satisfied employees are with their jobs at a certain company, followed by qualitative interviews to investigate why job satisfaction levels are low. They could then use the results to explore any conflicting or differing results, allowing them to gain a deeper understanding of job satisfaction at the company. 

By following a thread, the researcher can explore various research topics related to the original issue and gain a more comprehensive view of the issue.
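A minimal sketch of the idea (hypothetical data and codes): a pattern flagged in the quantitative data becomes the thread used to select and examine qualitative material:

```python
# Toy "following a thread" sketch (hypothetical data).
# Step 1: a quantitative result flags a thread worth pursuing.
satisfaction_by_team = {"sales": 3.9, "support": 2.1, "engineering": 4.2}  # mean scores out of 5
thread = min(satisfaction_by_team, key=satisfaction_by_team.get)  # lowest-scoring team

# Step 2: the thread is followed into the qualitative data: interview
# excerpts are filtered to those coded against the flagged team.
interview_codes = [
    {"team": "support", "code": "workload"},
    {"team": "support", "code": "unclear expectations"},
    {"team": "sales", "code": "pay"},
]
thread_evidence = [c["code"] for c in interview_codes if c["team"] == thread]
# thread_evidence now holds the qualitative codes that explain *why*
# satisfaction is low for the flagged team.
```

The team names and codes are invented; the point is only that the same theme (low satisfaction) is traced from one dataset into the next.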

Mixed methods matrix

This technique lays the data out as a table with one row per case (for example, a participant or site) and one column per data source, so that all of the quantitative and qualitative data collected about the same case can be examined together. It enables researchers to quickly see where the sources agree, where they conflict, and to adjust the analysis as needed.

For example, imagine a researcher who wanted to understand why people don't exercise regularly. Each row of the matrix could hold one participant's reported weekly exercise sessions (quantitative) alongside the themes coded from their interview (qualitative).

Reading across a row shows whether the different sources tell a consistent story about that participant; reading down the matrix reveals patterns across participants, such as low-activity participants repeatedly citing cost or lack of time. Conflicting results surface case by case, so the researcher can follow them up directly and integrate the data more effectively.
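In its common case-by-source form, the matrix can be sketched as a small table keyed by case (all data here are hypothetical):

```python
# Toy mixed methods matrix (hypothetical data): one row per case, one column
# per data source, so each case's quantitative and qualitative data are read together.
cases = {
    "P1": {"exercise_sessions_per_week": 0, "interview_theme": "no nearby facilities"},
    "P2": {"exercise_sessions_per_week": 4, "interview_theme": "exercises with friends"},
    "P3": {"exercise_sessions_per_week": 1, "interview_theme": "long working hours"},
}

# Scanning rows highlights within-case patterns: here, low activity co-occurs
# with access and time barriers, high activity with social support.
low_activity_themes = [
    row["interview_theme"]
    for row in cases.values()
    if row["exercise_sessions_per_week"] < 2
]
```

Participant IDs, counts, and themes are invented for illustration; in practice each cell might hold scores, quotations, or coded summaries.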

Mixed methods research is a powerful tool for understanding complex research topics. Using qualitative and quantitative data in one study allows researchers to understand their subject more deeply. 

Mixed methods research designs such as convergent parallel, explanatory sequential, and exploratory sequential offer flexibility, enabling researchers to collect both types of data while avoiding the limitations of either approach alone.

However, mixed methods research can produce differing or even conflicting results, so be aware of the potential pitfalls and take steps to ensure that data are integrated correctly. Used effectively, mixed methods research can offer valuable insight into topics that would otherwise remain largely unexplored.

What is an example of mixed methods research?

A typical example is a study that surveys employees about job satisfaction (quantitative) and interviews a subset of them to explore the reasons behind the scores (qualitative), then integrates both in a single analysis. Mixed methods studies commonly use surveys, interviews, and observations to collect data from multiple sources.

Which sampling method is best for mixed methods?

It depends on the research objectives, but a few methods are often used in mixed methods research designs. These include snowball sampling, convenience sampling, and purposive sampling. Each method has its own advantages and disadvantages.

What is the difference between mixed methods and multiple methods?

Mixed methods research combines quantitative and qualitative data in a single study. Multiple methods involve collecting data from different sources, such as surveys and interviews, but not necessarily combining them into one analysis. Mixed methods offer greater flexibility but can lead to differing or conflicting results when integrating data.


Dissertation Guide Readings

  • Action Research
  • Appreciative Inquiry
  • Delphi Technique
  • Ethnography
  • Grounded Theory
  • Narrative Inquiry
  • Needs Assessment
  • Phenomenology
  • Program Assessment
  • Doing Analysis with Excel
  • Correlational Research
  • Experimental & Quasi-Experimental Research
  • Ex Post Facto
  • Factor Analysis
  • Q-Methodology

Mixed-Method

  • Mixed Methods (SAGE Methods Map)
  • (2019). SAGE mixed methods research. SAGE Publications Ltd.
  • DeCuir-Gunby, J., & Schutz, P. (2017). Developing a mixed methods proposal: A practical guide for beginning researchers. SAGE Publications Ltd.
  • Hesse-Biber, S. N. (2010). Mixed methods research: Merging theory with practice. Guilford Publication.
  • Johnson, R. B., & Onwuegbuzie, A. J. (2004). Mixed methods research: A research paradigm whose time has come. Educational Researcher, 33(7), 14-26.
  • Onwuegbuzie, A. J., & Leech, N. L. (2006). Linking research questions to mixed methods data analysis procedures (1). The Qualitative Report, 11(3), 474-498.
  • Plano Clark, V., & Ivankova, N. (2016). Mixed methods research: A guide to the field. SAGE Publications Ltd.
  • Tashakkori, A., & Teddlie, C. (2010). SAGE handbook of mixed methods in social & behavioral research (2nd ed.). SAGE Publications, Inc.
  • Terrell, S. R. (2012). Mixed-methods research methodologies. The Qualitative Report, 17(1), 254-280.

Additional Readings

These texts are unavailable through the University Library, though you may be able to purchase them online or through a local retailer.

  • Creswell, J. W. (2006). Understanding mixed methods research. Sage.
  • Creswell, J. W., & Clark, P. V. L. (2011). Designing and conducting mixed methods research (2nd ed.). Sage.
  • Teddlie, C., & Tashakkori, A. (Eds.). (2009). Foundations of mixed methods research: Integrating quantitative and qualitative approaches in the social and behavioral sciences. Sage.

Note: We do our best to ensure these citations are correct within our system's constraints. We suggest checking your citations to make sure they meet current APA standards.

  • APA Style: Reference Examples
  • Last Updated: Aug 28, 2024 9:04 AM
  • URL: https://library.phoenix.edu/dissertation_guide_readings

Designing a Research Proposal in Mixed-Method Approach

  • First Online: 27 October 2022


  • Lokasundari Vijaya Sankar


A research proposal is an important document that outlines a plan for a research study. It should contain pertinent and sufficient information for the application of grants, scholarships, these proposals, and other scientific studies to be examined and approved by a panel of examiners. A research proposal should first introduce the topic of study and its importance to the scientific community. It will further give an argument as to why the study is important and outline the objective and research questions that drive the study. A detailed plan for the study should be arrived at, describing theoretical bases, the sample for the study and the data collection and analysis methods. A plan of execution should also be included.



Author information

Authors and affiliations.

School of Communication, Taylor University, Subang Jaya, Malaysia

Lokasundari Vijaya Sankar


Corresponding author

Correspondence to Lokasundari Vijaya Sankar .

Editor information

Editors and affiliations.

Centre for Family and Child Studies, Research Institute of Humanities and Social Sciences, University of Sharjah, Sharjah, United Arab Emirates

M. Rezaul Islam

Department of Development Studies, University of Dhaka, Dhaka, Bangladesh

Niaz Ahmed Khan

Department of Social Work, School of Humanities, University of Johannesburg, Johannesburg, South Africa

Rajendra Baikady


Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this chapter

Sankar, L.V. (2022). Designing a Research Proposal in Mixed-Method Approach. In: Islam, M.R., Khan, N.A., Baikady, R. (eds) Principles of Social Research Methodology. Springer, Singapore. https://doi.org/10.1007/978-981-19-5441-2_31

Published : 27 October 2022

Publisher Name : Springer, Singapore

Print ISBN : 978-981-19-5219-7

Online ISBN : 978-981-19-5441-2



Qualitative, quantitative and mixed methods dissertations

What are they, and which one should I choose?

In the sections that follow, we briefly describe the main characteristics of qualitative, quantitative and mixed methods dissertations. Rather than being exhaustive, the main goal is to highlight what these types of research are and what they involve. Whilst you read through each section, try and think about your own dissertation, and whether you think that one of these types of dissertation might be right for you. After reading about these three types of dissertation, we highlight some of the academic, personal and practical reasons why you may choose to take on one type over another.

  • Types of dissertation: Qualitative, quantitative and mixed methods dissertations
  • Choosing between types: Academic, personal and practical justifications

Types of dissertation

Whilst we describe the main characteristics of qualitative, quantitative and mixed methods dissertations, the Lærd Dissertation site currently focuses on helping guide you through quantitative dissertations , whether you are a student of the social sciences, psychology, education or business, or are studying medical or biological sciences, sports science, or another science-based degree. Nonetheless, you may still find our introductions to qualitative dissertations and mixed methods dissertations useful, if only to decide whether these types of dissertation are for you. We discuss quantitative dissertations , qualitative dissertations and mixed methods dissertations in turn:

Quantitative dissertations

When we use the word quantitative to describe quantitative dissertations , we do not simply mean that the dissertation will draw on quantitative research methods or statistical analysis techniques . Quantitative research takes a particular approach to theory , answering research questions and/or hypotheses , setting up a research strategy , making conclusions from results , and so forth. Classic routes that you can follow include replication-based studies , theory-driven research and data-driven dissertations . However, irrespective of the particular route that you adopt when taking on a quantitative dissertation, there are a number of core characteristics to quantitative dissertations:

They typically attempt to build on and/or test theories , whether adopting an original approach or an approach based on some kind of replication or extension .

They answer quantitative research questions and/or research (or null ) hypotheses .

They are mainly underpinned by positivist or post-positivist research paradigms .

They draw on one of four broad quantitative research designs (i.e., descriptive , experimental , quasi-experimental or relationship-based research designs).

They try to use probability sampling techniques , with the goal of making generalisations from the sample being studied to a wider population , although often end up applying non-probability sampling techniques .

They use research methods that generate quantitative data (e.g., data sets , laboratory-based methods , questionnaires/surveys , structured interviews , structured observation , etc.).

They draw heavily on statistical analysis techniques to examine the data collected, whether descriptive or inferential in nature.

They assess the quality of their findings in terms of their reliability , internal and external validity , and construct validity .

They report their findings using statements , data , tables and graphs that address each research question and/or hypothesis.

They make conclusions in line with the findings , research questions and/or hypotheses , and theories discussed in order to test and/or expand on existing theories, or to provide insight for future theories.

If you choose to take on a quantitative dissertation , go to the Quantitative Dissertations part of Lærd Dissertation now. You will learn more about the characteristics of quantitative dissertations, as well as being able to choose between the three classic routes that are pursued in quantitative research: replication-based studies , theory-driven research and data-driven dissertations . Upon choosing your route, the Quantitative Dissertations part of Lærd Dissertation will help guide you through these routes, from topic idea to completed dissertation, as well as showing you how to write up quantitative dissertations.

Qualitative dissertations

Qualitative dissertations , like qualitative research in general, are often associated with qualitative research methods such as unstructured interviews, focus groups and participant observation. Whilst they do use a set of research methods that are not used in quantitative dissertations, qualitative research is much more than a choice between research methods. Qualitative research takes a particular approach towards the research process , the setting of research questions , the development and use of theory , the choice of research strategy , the way that findings are presented and discussed, and so forth. Overall, qualitative dissertations will be very different in approach, depending on the particular route that you adopt (e.g., case study research compared to ethnographies). Classic routes that you can follow include autoethnographies , case study research , ethnographies , grounded theory , narrative research and phenomenological research . However, irrespective of the route that you choose to follow, there are a number of broad characteristics to qualitative dissertations:

They follow an emergent design , meaning that the research process , and sometimes even the qualitative research questions that you tackle, often evolve during the dissertation process.

They use theory in a variety of ways - sometimes drawing on theory to help the research process; on other occasions, using theory to develop new theoretical insights ; sometimes both - but the goal is infrequently to test a particular theory from the outset.

They can be underpinned by one of a number of research paradigms (e.g., interpretivism , constructivism , critical theory , amongst many other research paradigms).

They follow research designs that heavily influence the choices you make throughout the research process, as well as the analysis and discussion of 'findings' (i.e., such research designs differ considerably depending on the route that is being followed, whether an autoethnography , case study research , ethnography , grounded theory , narrative research , phenomenological research , etc.).

They try to use theoretical sampling - a group of non-probability sampling techniques - with the goal of studying cases (i.e., people or organisations) that are most appropriate to answering their research questions.

They study people in-the-field (i.e., in natural settings ), often using multiple research methods , each of which generate qualitative data (e.g., unstructured interviews , focus groups , participant observation , etc.).

They interpret the qualitative data through the eyes and biases of the researcher , going back-and-forth through the data (i.e., an inductive process ) to identify themes or abstractions that build a holistic/gestalt picture of what is being studied.

They assess the quality of their findings in terms of their credibility , dependability , confirmability and transferability .

They present (and discuss ) their findings through personal accounts , case studies , narratives , and other means that identify themes or abstracts , processes , observations and contradictions , which help to address their research questions.

They discuss the theoretical insights arising from the findings in light of the research questions, from which tentative conclusions are made.

If you choose to take on a qualitative dissertation , you will be able to learn a little about appropriate research methods and sampling techniques in the Fundamentals section of Lærd Dissertation. However, we have not yet launched a dedicated section to qualitative dissertations within Lærd Dissertation. If this is something that you would like us to do sooner than later, please leave feedback .

Mixed methods dissertations

Mixed methods dissertations combine qualitative and quantitative approaches to research. Whilst they are increasingly used and have gained greater legitimacy, much less has been written about their component parts. There are a number of reasons why mixed methods dissertations are used, including the feeling that a research question can be better addressed by:

Collecting qualitative and quantitative data , and then analysing or interpreting that data, whether separately or by mixing it.

Conducting more than one research phase ; perhaps conducting qualitative research to explore an issue and uncover major themes, before using quantitative research to measure the relationships between the themes.

One of the problems (or challenges) of mixed methods dissertations is that qualitative and quantitative research, as you will have seen from the two previous sections, are very different in approach. In many respects, they are opposing approaches to research. Therefore, when taking on a mixed methods dissertation, you need to think particularly carefully about the goals of your research, and whether the qualitative or quantitative components (a) are more important in philosophical, theoretical and practical terms, and (b) should be combined or kept separate.

Again, as with qualitative dissertations, we have yet to launch a dedicated section of Lærd Dissertation to mixed methods dissertations . However, you will be able to learn about many of the quantitative aspects of doing a mixed methods dissertation in the Quantitative Dissertations part of Lærd Dissertation. You may even be able to follow this part of our site entirely if the only qualitative aspect of your mixed methods dissertation is the use of qualitative methods to help you explore an issue or uncover major themes, before performing quantitative research to examine such themes further. Nonetheless, if you would like to see a dedicated section to mixed methods dissertations sooner than later, please leave feedback .


Mixed Methods Research | Definition, Guide, & Examples

Published on 4 April 2022 by Tegan George . Revised on 25 October 2022.

Mixed methods research combines elements of quantitative research and qualitative research in order to answer your research question . Mixed methods can help you gain a more complete picture than a standalone quantitative or qualitative study, as it integrates benefits of both methods.

Mixed methods research is often used in the behavioral, health, and social sciences, especially in multidisciplinary settings and complex situational or societal research.

  • To what extent does the frequency of traffic accidents ( quantitative ) reflect cyclist perceptions of road safety ( qualitative ) in Amsterdam?
  • How do student perceptions of their school environment ( qualitative ) relate to differences in test scores ( quantitative ) ?
  • How do interviews about job satisfaction at Company X ( qualitative ) help explain year-over-year sales performance and other KPIs ( quantitative ) ?
  • How can voter and non-voter beliefs about democracy ( qualitative ) help explain election turnout patterns ( quantitative ) in Town X?
  • How do average hospital salary measurements over time (quantitative) help to explain nurse testimonials about job satisfaction (qualitative) ?

Table of contents

  • When to use mixed methods research
  • Mixed methods research designs
  • Benefits of mixed methods research
  • Disadvantages of mixed methods research
  • Frequently asked questions about mixed methods research

When to use mixed methods research

Mixed methods research may be the right choice if your research process suggests that quantitative or qualitative data alone will not sufficiently answer your research question. There are several common reasons for using mixed methods research:

  • Generalisability : Qualitative research usually has a smaller sample size , and thus is not generalisable . In mixed methods research, this comparative weakness is mitigated by the comparative strength of ‘large N’, externally valid quantitative research.
  • Contextualisation: Mixing methods allows you to put findings in context and add richer detail to your conclusions. Using qualitative data to illustrate quantitative findings can help ‘put meat on the bones’ of your analysis.
  • Credibility: Using different methods to collect data on the same subject can make your results more credible. If the qualitative and quantitative data converge, this strengthens the validity of your conclusions. This process is called triangulation .

As you formulate your research question , try to directly address how qualitative and quantitative methods will be combined in your study. If your research question can be sufficiently answered via standalone quantitative or qualitative analysis, a mixed methods approach may not be the right fit.

Keep in mind that mixed methods research doesn’t just mean collecting both types of data; you need to carefully consider the relationship between the two and how you’ll integrate them into coherent conclusions. Mixed methods can be very challenging to put into practice, so it’s a less common choice than standalone quantitative or qualitative research.


Mixed methods research designs

There are different types of mixed methods research designs . The differences between them relate to the aim of the research, the timing of the data collection , and the importance given to each data type.

As you design your mixed methods study, also keep in mind:

  • Your research approach ( inductive vs deductive )
  • Your research questions
  • What kind of data is already available for you to use
  • What kind of data you’re able to collect yourself.

Here are a few of the most common mixed methods designs.

Convergent parallel

In a convergent parallel design, you collect quantitative and qualitative data at the same time and analyse them separately. After both analyses are complete, compare your results to draw overall conclusions.

For example, in a study of cyclist road safety in Amsterdam:

  • On the qualitative side, you analyse cyclist complaints via the city’s database and on social media to find out which areas are perceived as dangerous and why.
  • On the quantitative side, you analyse accident reports in the city’s database to find out how frequently accidents occur in different areas of the city.
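A minimal sketch of the comparison step (hypothetical data, loosely following the Amsterdam example): each strand is analysed separately, then the per-area results are laid side by side so convergence and divergence are easy to spot:

```python
# Toy convergent parallel comparison (hypothetical data): quantitative accident
# counts and qualitative complaint themes are analysed separately, then joined
# by area for the final comparison.
accidents_per_area = {"centrum": 42, "noord": 11, "zuid": 27}  # quantitative strand
perceived_danger_themes = {                                    # qualitative strand
    "centrum": ["tram tracks", "tourist crowds"],
    "zuid": ["poor lighting"],
}

side_by_side = {
    area: {
        "accidents": accidents_per_area[area],
        "themes": perceived_danger_themes.get(area, []),  # [] = no complaints recorded
    }
    for area in accidents_per_area
}
# Areas with many accidents *and* danger themes (centrum) show convergence;
# noord has accidents on record but no complaints, a divergence worth probing.
```

The area names, counts, and themes are invented; the design point is only that neither strand drives the other until both analyses are complete.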

Embedded

In an embedded design, you collect and analyse both types of data at the same time, but within a larger quantitative or qualitative design. One type of data is secondary to the other.

This is a good approach to take if you have limited time or resources. You can use an embedded design to strengthen or supplement your conclusions from the primary type of research design.

Explanatory sequential

In an explanatory sequential design, your quantitative data collection and analysis occurs first, followed by qualitative data collection and analysis.

You should use this design if you think your qualitative data will explain and contextualise your quantitative findings.

Exploratory sequential

In an exploratory sequential design, qualitative data collection and analysis occurs first, followed by quantitative data collection and analysis.

You can use this design to first explore initial questions and develop hypotheses. Then you can use the quantitative data to test or confirm your qualitative findings.

Benefits of mixed methods research

‘Best of both worlds’ analysis

Combining the two types of data means you benefit from both the detailed, contextualised insights of qualitative data and the generalisable, externally valid insights of quantitative data. The strengths of one type of data often mitigate the weaknesses of the other.

For example, solely quantitative studies often struggle to incorporate the lived experiences of your participants, so adding qualitative data deepens and enriches your quantitative results.

Solely qualitative studies are often not very generalisable, only reflecting the experiences of your participants, so adding quantitative data can validate your qualitative findings.

Method flexibility

Mixed methods are less tied to disciplines and established research paradigms. They offer more flexibility in designing your research, allowing you to combine aspects of different types of studies to distill the most informative results.

Mixed methods research can also combine theory generation and hypothesis testing within a single study, which is unusual for standalone qualitative or quantitative studies.

Disadvantages of mixed methods research

Mixed methods research is very labour-intensive. Collecting, analysing, and synthesising two types of data into one research product takes a lot of time and effort, and often involves interdisciplinary teams of researchers rather than individuals. For this reason, mixed methods research has the potential to cost much more than standalone studies.

Differing or conflicting results

If your analysis yields conflicting results, it can be very challenging to know how to interpret them in a mixed methods study. If the quantitative and qualitative results do not agree or you are concerned you may have confounding variables , it can be unclear how to proceed.

Frequently asked questions about mixed methods research

Quantitative research deals with numbers and statistics, while qualitative research deals with words and meanings.

Quantitative methods allow you to test a hypothesis by systematically collecting and analysing data, while qualitative methods allow you to explore ideas and experiences in depth.

In mixed methods research , you use both qualitative and quantitative data collection and analysis methods to answer your research question .

Data collection is the systematic process by which observations or measurements are gathered in research. It is used in many different contexts by academics, governments, businesses, and other organisations.

Triangulation in research means using multiple datasets, methods, theories and/or investigators to address a research question. It’s a research strategy that can help you enhance the validity and credibility of your findings.

Triangulation is mainly used in qualitative research , but it’s also commonly applied in quantitative research . Mixed methods research always uses triangulation.

These are four of the most common mixed methods designs :

  • Convergent parallel: Quantitative and qualitative data are collected at the same time and analysed separately. After both analyses are complete, compare your results to draw overall conclusions. 
  • Embedded: Quantitative and qualitative data are collected at the same time, but within a larger quantitative or qualitative design. One type of data is secondary to the other.
  • Explanatory sequential: Quantitative data is collected and analysed first, followed by qualitative data. You can use this design if you think your qualitative data will explain and contextualise your quantitative findings.
  • Exploratory sequential: Qualitative data is collected and analysed first, followed by quantitative data. You can use this design if you think the quantitative data will confirm or validate your qualitative findings.

Cite this Scribbr article


George, T. (2022, October 25). Mixed Methods Research | Definition, Guide, & Examples. Scribbr. Retrieved 18 September 2024, from https://www.scribbr.co.uk/research-methods/mixed-methods/



Approaches to Mixed Methods Dissemination and Implementation Research: Methods, Strengths, Caveats, and Opportunities

Carla A. Green

Center for Health, Kaiser Permanente Northwest

Naihua Duan

Professor Emeritus, Columbia University Medical Center

Robert D. Gibbons

Professor of Medicine & Health Studies, Director, Center for Health Statistics, University of Chicago

Kimberly E. Hoagwood

Cathy and Stephen Graham Professor of Child and Adolescent Psychiatry, Department of Child and Adolescent Psychiatry, New York University Langone Medical Center

Lawrence A. Palinkas

Albert G. and Frances Lomas Feldman Professor of Social Policy and Health, School of Social Work, University of Southern California

Jennifer P. Wisdom

Associate Vice President for Research, George Washington University

Limited translation of research into practice has prompted study of diffusion and implementation, and development of effective methods of encouraging adoption, dissemination and implementation. Mixed methods techniques offer approaches for assessing and addressing processes affecting implementation of evidence-based interventions. We describe common mixed methods approaches used in dissemination and implementation research, discuss strengths and limitations of mixed methods approaches to data collection, and suggest promising methods not yet widely used in implementation research. We review qualitative, quantitative, and hybrid approaches to mixed methods dissemination and implementation studies, and describe methods for integrating multiple methods to increase depth of understanding while improving reliability and validity of findings.

Lack of translation of research findings into practice, and significant lags in translation time for those that are translated, have prompted health services researchers to study natural processes of diffusion of innovative findings and to develop more effective methods of encouraging adoption, dissemination and implementation (D & I) ( Berwick, 2003 ; Proctor et al., 2009 ; Westfall, Mold, & Fagnan, 2007 ). These efforts have led to more nuanced understandings of the processes and agents involved in diffusion and implementation, and what was once viewed as a vexing failure among clinicians and organizations to implement what was “evidence-based” is now more appropriately viewed as a failure to design implementation strategies that take into account the organizational, clinical, and social environments that affect uptake of research.

What is emerging is a more complex picture of the ways in which research findings and implementation processes are situated within organizational cultures and processes, within communities, and in concert with regional, state, and national policies. There is also increasing recognition that if care and health are to be improved, research must be designed, disseminated, and implemented in concert with stakeholders. This means learning about the experiences, perspectives, and needs of a full range of players, from policy-makers to agency directors, supervisors to front-line clinical staff, and from patients to their families. To achieve these goals, researchers have increasingly turned to mixed methods approaches to understand, collaborate with, and respond to stakeholders in the communities in which they intend their work to be disseminated and implemented ( Shortell, 1999 ). Mixed methods designs—those which systematically integrate qualitative and quantitative data—are intrinsically suited to the work of D & I research: They provide an array of methods and opportunities for collecting, triangulating, and analyzing information gathered from different stakeholder constituencies, and for developing a deeper understanding of the full range of perspectives and processes that affect adoption and implementation. Formative, process, and evaluative questions are all fair game ( Stetler et al., 2006 ), and mixed methods designs capitalize on the strengths of each method used while attempting to reduce each method’s weaknesses. That is, they address the limited generalizability that results from most qualitative approaches and the limited depth of understanding typical of findings derived from quantitative data by combining techniques from both approaches.

Integrating Qualitative and Quantitative data

In mixed methods studies, qualitative and quantitative data can be integrated at multiple stages—at the time of data collection, during analysis, or during interpretation. Data are integrated differently depending on whether the study collects qualitative and quantitative data sequentially or simultaneously, and on the extent to which the study places emphasis on each technique ( Creswell and Plano Clark, 2007 ). In some D & I mixed methods designs, for example, qualitative data can be analyzed to inform later quantitative data collection processes (sequential, exploratory models) or qualitative data collection that follows quantitative data collection can be analyzed to explain quantitative results (sequential, explanatory models). When both types of data are collected simultaneously, they may be analyzed together, each to inform the other, or one type of data may be transformed for use in analyses of the other data (e.g., qualitative data converted to categorical data for inclusion in quantitative analysis; quantitative data used to create classifications of individuals whose qualitative responses are then compared). Irrespective of the methods chosen, an important component of integration should be analyses of consistencies and inconsistencies in findings ( Creswell and Plano Clark, 2007 ). This involves searching for and evaluating inconsistencies within and across data sources. For example, in thematic analyses, it is important to identify and report on cases that contradict what appear to be common themes in the data; when comparing quantitative results to qualitative findings, inconsistencies might be a function of differential responses of subgroups to the intervention that can be further explored using existing data.
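One integration route mentioned above is converting qualitative data to categorical form for inclusion in quantitative analysis (sometimes called "quantitizing"). The sketch below illustrates the idea in minimal form: hand-coded interview themes become 0/1 indicator variables that can be merged with outcome data. All names, themes, and data here are invented for illustration, not drawn from any study described in this paper.

```python
# Hypothetical sketch: "quantitizing" coded interview data so it can join a
# quantitative analysis. Each interview has been hand-coded for the presence
# of themes; we convert those codes into binary indicators and compare theme
# rates across sites. All sites, themes, and data are invented.

interviews = [
    {"site": "A", "themes": {"leadership_support", "staff_turnover"}},
    {"site": "A", "themes": {"leadership_support"}},
    {"site": "B", "themes": {"staff_turnover", "workload"}},
    {"site": "B", "themes": {"workload"}},
]

codebook = ["leadership_support", "staff_turnover", "workload"]

def to_indicators(record):
    """Convert one interview's theme codes into 0/1 variables."""
    return {theme: int(theme in record["themes"]) for theme in codebook}

# Build a flat table suitable for merging with quantitative outcome data.
rows = [{"site": r["site"], **to_indicators(r)} for r in interviews]

def theme_rate(site, theme):
    """Proportion of a site's interviews in which a theme was coded present."""
    site_rows = [r for r in rows if r["site"] == site]
    return sum(r[theme] for r in site_rows) / len(site_rows)

print(theme_rate("A", "leadership_support"))  # 1.0
print(theme_rate("A", "staff_turnover"))      # 0.5
```

In a real study the indicator table would be merged with quantitative records by participant or site identifier, and inconsistencies between the two data sources would be examined rather than smoothed over, as the paragraph above recommends.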

Because more detailed methods of analysis and reporting of qualitative and mixed methods studies are beyond the scope of this paper, we refer readers to existing comprehensive sources.¹ In the sections that follow, we review qualitative and quantitative approaches that can be integrated in different ways to produce strong mixed methods designs. We also cover hybrid methods—approaches that include, as essential components, multiple data sources and types, or analytic techniques that inherently integrate qualitative and quantitative approaches. Most hybrid methods have more recent origins, so have been used less frequently or not yet applied to D & I research. We include these methods because of their potential promise in this context.

Qualitative and Hybrid Approaches within Dissemination and Implementation Research

Creswell identifies five traditions of qualitative inquiry (biography, phenomenology, grounded theory, ethnography and case study) and five philosophical frameworks underlying these approaches (ontological, epistemological, axiological, rhetorical, methodological) ( Creswell, 1998 ). These traditions and approaches remain the underpinnings of qualitative inquiry within mixed methods D & I research. Within these frameworks, researchers have a wide range of mixed methods designs and data collection techniques from which to choose. Appropriately matching research and sampling design to research questions, data collection approaches, emphasis on qualitative versus quantitative data, and ordering of particular methods, are essential to producing interpretable and useful findings ( Creswell & Plano Clark, 2007 ; Palinkas et al., 2011a ; Palinkas, Horwitz, Chamberlain, Hurlburt, & Landsverk, 2011 ; Palinkas et al., 2013 ). In the sections that follow, we describe the qualitative methods most commonly used in D & I research, and describe some of the ways those methods can be integrated with, or augment, quantitative approaches.

Most qualitative inquiry in D & I research revolves around the collection and analysis of text or observational data. Text may be generated using interviews, result from notes taken during observations, or be drawn from existing documents, such as meeting minutes, correspondence, training materials, bylaws, standard practice manuals, organizational reports and websites, and books, magazines or newspapers. Analysis of text can include the following: (1) testing hypotheses (e.g., by way of content analysis); (2) identifying common meanings and interconnections such as among clinicians providing team-based care (e.g., through hermeneutic analysis); (3) discovering commonalities in the ways individuals talk or tell stories about an event such as an implementation process (e.g., using narrative analysis), or (4) identifying categories and concepts and linking those concepts into a formal theory of implementation roll-outs (e.g., using grounded theory) ( Bernard, 2011 ). Mixed methods D & I projects typically pair one or more of these qualitative methods with one or more quantitative methods to triangulate findings and improve validity, to aid understanding of quantitative results, or to include measures derived from qualitative data in quantitative analyses.

Interviews are among the most commonly employed qualitative data collection methods used in D & I research. They can be conducted individually or in groups, and can be semi-structured or structured in nature. Interviews have a place in all phases of D & I research, from formative and developmental assessments through implementation, process, and evaluative components.

Semi-structured interviews

Semi-structured interviews are typically exploratory, while structured interviews are more likely to be quantitative and confirmatory—that is, structured interviews typically have fixed responses deriving from conceptual models with clear hypotheses to be tested (see section on Formal Ethnographic Methods below, for an exception). In structured interviews, participants are asked the same questions in the same order and provided with the same set of responses. Semi-structured interviews allow the flexibility of qualitative data collection while at the same time providing more standardization than naturalistic or unstructured interviews. Interview guides provide a set of questions and prompts to guide the interviewer, but the interviewer is allowed to follow the flow of conversation, asking questions as they occur naturally, and following up with unanticipated questions when interviewees raise topics of particular interest or importance. In some cases, structured and semi-structured questions are included in the same interview, allowing easy integration and triangulation of results, as the sample is the same for both qualitative and quantitative data collection.

In addition to the type of interview approach chosen, researchers must make choices about how they will frame semi-structured interviews. The questions that are asked, and the consequences of those choices, depend on what data are desired, and how those data will be integrated with other analyses, including whether responses will be coded for inclusion in quantitative analyses. Questions asking interviewees to generalize and compare their situations or experiences to those of others will produce sociological, often abstract, answers, whereas researchers seeking to understand the specifics of people's experiences need to ask questions that elicit the particulars of those individual experiences ( Chase, 2005 ). For this reason, if the goal is to understand the results of quantitative analyses, researchers may choose questions that lead to generalizations, while those developing questions as part of formative work that precedes and informs implementation of interventions may be more likely to use questions that result in detailed responses that will help to identify obstacles to implementation or opportunities for smoothing intervention roll-out. If the goal is to code qualitative responses so that they can be included in statistical analyses, interviewers must be sure to ask all participants these questions and probe for responses that can be clearly coded in either a binary or scalar fashion.

Key Informant Interviews

Key informant interviews can range from loosely organized conversations to semi-structured interviews—the distinction between these interviews and other approaches is that they are conducted with individuals who have extensive and important information ( Gilchrist & Williams, 1999 ) needed to carry out and understand processes targeted by D & I projects. That is, they are interviews with experts ( Marshall, 1996 ) who are selected because they have comprehensive knowledge because of their roles or because of their ability to translate, interpret, teach or mentor the researcher in the setting of interest ( Dicicco-Bloom & Crabtree, 2006 ). Although historically used in anthropology in lieu of broader sampling procedures ( Tremblay, 1957 ), in D & I research, they are most commonly used early in developmental evaluations, or after implementation, to take advantage of the informant’s in-depth knowledge of the setting and how its characteristics may affect or have affected implementation. Key informant interviews can also be used during other phases of evaluation as a relatively quick and simple method for assessing effects of context on interventions, or on intervention processes, progress, and outcomes. Such interviews, though extremely helpful in obtaining an “insider’s” view, also provide unique perspectives that may not be representative of other stakeholders. Nevertheless, the best key informants are keen observers who often understand and report a range of stakeholder perspectives, even if they do not agree with those perspectives. They can help guide data collection and generate hypotheses in addition to providing insight and aiding understanding at different project phases. Corroboration and examination of hypotheses resulting from key informant interviews are important methods of integrating findings using multiple methods ( Gilchrist & Williams, 1999 ).

Individual in-depth interviews

Compared to key informant interviews, individual in-depth interviews are typically designed to obtain deeper understandings of commonalities and differences among groups of individuals that share important characteristics or experiences, or to understand the perspectives of individuals at different points along a continuum of interest ( Miller & Crabtree, 1999 ). In-depth interviews, in particular, are intended to elicit personal, intimate, and detailed narratives ( Dicicco-Bloom & Crabtree, 2006 ). Their most important use in D & I projects is to shed light on the ways in which implementation processes interact with organizations and stakeholders to produce outcomes—both expected and unexpected. Recognizing that stakeholders’ primary responsibilities are rarely research focused, interview length and guides are constructed to address key research questions and be mindful of the exigencies experienced by those being interviewed. Therefore, interview guides for busy clinicians or administrators are often shorter and more narrowly focused; interviews with users of clinical services may be longer and, correspondingly, include questions that delve more deeply, with prompts to encourage additional exploration of interviewees’ experiences.

Semi-structured interview guides are often adapted over time as data are analyzed and more is learned about the research question and the strengths and weaknesses of the guide ( Charmaz, 2006 ). This adaptability makes semi-structured interviews—whether group or individual—extremely useful in mixed methods D & I research. Various designs are common, including interviews in the formative phase of a quantitative D & I project, explanatory interviews used to explain results obtained using other methods (typically quantitative), or interviews used to understand processes and implementation during rollout of a program, an intervention, or a randomized controlled trial ( Creswell, Klassen, Plano Clark, & Smith, 2011 ; Palinkas et al., 2011b ; Palinkas et al., 2011 ; Stetler et al., 2006 ). Flexibility allows researchers to change or add questions in response to findings from interviews as well as other data sources. Similarly, findings can inform implementation while it is still in process, providing opportunities to alter approaches and increase the likelihood of success in ever-changing clinical and social environments. Thus, most interview-based qualitative D & I research is flexible and iterative in nature, and opportunities for integration are many and varied. The increased rigor obtained from triangulation of interview and quantitative data increases confidence when results converge across data-collection methods, and this is a major benefit of this mixed methods pairing ( Torrey, Bond, McHugo, & Swain, 2012 ).

Focus group interviews

Focus groups are collective conversations or group interviews that have at their core the assumption that group interaction will stimulate thoughts and ideas that might not be elicited in an individual interview ( Kamberelis & Dimitriadis, 2005 ). Typically, a group of individuals sharing common experiences or states (e.g., parents of children with mental health problems), or exposure to specific services, are asked about their perspectives, beliefs, or attitudes regarding their shared experiences. Like individual interviews, focus groups have a place in formative, process, implementation, and explanatory phases of projects. They have advantages over individual interviews in that they can be more cost-effective (more participants interviewed in the same time period) and because the group structure can be more stimulating, and thus may elicit a wider range of perspectives and ideas than individual interviews ( Morgan, 1993 ). Group interviews also have disadvantages compared to individual interviews. They are more difficult to coordinate, convene, and conduct; participants may be less likely to share sensitive information in group settings; and it may not be possible to explore topics as deeply as in individual interviews ( Bernard, 2011 ). Moreover, in D & I research, focus group interviews are more likely to include stakeholders who know one another when compared to other research applications. This is particularly true when interviews target staff involved in service delivery or project implementation. In such situations, power relations become important, because truthful or complete responses may not be forthcoming from participants who feel that full disclosure might put them at risk in some way (e.g., when supervisors are participants in the same group interview). If such situations cannot be avoided, alternative techniques that protect confidentiality, such as individual interviews or surveys, may provide more accurate data.
Focus group interview data can be integrated with D & I data from other sources in most of the ways that individual interview data can be integrated. An exception is the ability to convert qualitative data to binary or scalar indicators for use in quantitative analyses. Unless group perspectives can be characterized for composite measures, this is a limitation of group interviews relative to individual interviews for mixed methods integration.

Observational Approaches: Participant Observation and Ethnographic Methods

Observation is fundamental to all scientific inquiry, though the types of observation differ substantially from observation that follows experimental interventions to non-interventionist techniques that seek to examine the natural course of events as they would occur without the presence of the observer ( Adler & Adler, 1998 ). Participant observation and ethnography are qualitative observational techniques, developed primarily in anthropology and sociology, that have significant value in D & I research. Observational research of this type has been evolving over time, with a shift in focus from the researcher as dispassionate observer to that of a participant observer interacting as a member of the community s/he is studying ( Angrosino, 2005 ).

Ethnography refers to both the process and the outcome of the research venture, which includes interpretations and descriptions of a particular group, organization, or system, including processes and behaviors at the levels studied, and details about the customs, norms, roles, and methods of interaction in the setting ( Creswell, 1998 ). In D & I research, ethnography is typically carried out through participant (or sometimes non-participant) observation and interviews, with the researcher immersing him/herself in the regular, daily activities of the people involved in the setting while recording observations that document interactions, processes, tasks and outcomes, as well as personal reactions to these observations. In most cases, this is a long-term investment of time and energy, with regular observation occurring over weeks, months or years (though see the section on Rapid Ethnographic Assessment for an alternative model). Goals are to (a) produce a full picture of the ways in which a project was implemented, (b) describe the extent of fidelity to the intervention, and (c) identify and understand barriers and facilitators of implementation. Researchers often use key informant interviews, in-depth interviews and focus group interviews, combined with text from other sources and available quantitative data, to create detailed accounts of the implementation process and its context. Taking careful, detailed field notes is a critical component of ethnography, as is recording of interviews, review of relevant documents and quantitative data, and working to identify any personal biases that might affect conclusions. Searching for information that might contradict conclusions is also critical to producing good ethnography.

Ethical concerns that are particular to participant observation must also be addressed. For example, difficulties can arise if key individuals do not consent to be observed, particularly when they interact with others who have consented. Ethnography is not for the faint of heart, but when done well, it can provide invaluable, comprehensive information about implementation and dissemination that, when combined with quantitatively measured outcomes, can provide a complete picture of the processes and outcomes associated with D & I projects. Gabbay and le May's ethnographic work on clinical decision making in two primary care settings clearly shows how implementation of evidence based practices in routine clinical settings compares to expectations among researchers and administrators about the ways clinicians consume research and become aware of and use guidelines. Over two years of observations and interviews carried out in two small group practices, the authors found that clinicians relied on trusted sources such as colleagues, and free magazines, rather than directly accessing and appraising information and evidence from original sources or guidelines ( Gabbay & le May, 2004 ). Clinicians referred to guidelines to confirm existing practices, and when they had patients with challenging or unfamiliar problems. Guidelines were not routinely used, and little attention was paid to them when they were disseminated ( Gabbay & le May, 2004 ). The findings outlined in this report represent the kind of information essential to researchers developing frameworks designed to increase adoption of evidence based practices.

Rapid Ethnographic Assessment (REA)

Rapid ethnographic assessment is a hybrid method and one of a group of rapid evaluation and assessment methods (REAM) that have significant potential for use in dissemination, implementation, and evaluation studies, particularly when time is of the essence and rigorous research results are needed ( Beebe, 2001 ; McNall & Foster-Fishman, 2007 ). REAM and rapid ethnographic assessment offer real-time evaluations that can provide quick assessments of local conditions that can be used to inform the design and implementation of effective interventions ( McNall & Foster-Fishman, 2007 ). Some projects can be completed in as little as eight weeks ( McNall & Foster-Fishman, 2007 ); methods typically include key informant and focus group interviews, targeted rapid quantitative assessment surveys, and intensive direct observation ( Trotter, Needle, Goosby, Bates, & Singer, 2001 ). Speed is gained by rapid data collection using multiple modalities, including quantitative data, with less complicated analytic approaches used for qualitative data (e.g., coding and analysis of interview notes rather than transcribed interviews). Advantages include the ability to obtain information about implementation and processes quickly, allowing modifications. See Murray et al. ( Murray, Tapson, Turnbull, McCallum, & Little, 1994 ) and Needle et al. ( Needle et al., 2003 ) for examples.

Event Structure Analysis (ESA)

To our knowledge, this promising hybrid method has yet to be applied in D & I research. It offers a systematic, uniform, computer-assisted method ( Heise, 2012 ) of analyzing and interpreting narrative and observational data derived from ethnographic studies ( Corsaro & Heise, 1990 ; Heise, 1989 ). It appears particularly relevant for analyzing the kinds of organizational processes ( Pentland, 1999 ; Stevenson & Greenberg, 1998 ; Trumpy, 2008 ) that are often critical to D & I research. ESA breaks down the constituent parts of event sequences to develop graphical models that allow causal interpretations and explanations of processes that can then be tested and further refined. The strength of the method is that analysts, through the process of specifying the model, are forced to carefully consider contextual factors, causal ordering of events, the processes leading to each event, and the understanding and interpretation of all events in the model ( Griffin & Korstad, 1998 ).

Formal ethnographic methods

Formal ethnographic methods are hybrid approaches that involve structured qualitative data collection and analytic techniques that are quasi-statistical in nature. Unlike semi-structured approaches, formal ethnographic methods require that the same stimuli (i.e., task or set of questions) be presented to all study participants. This is often referred to as structured interviewing ( Bernard, 2011 ) or systematic data collection ( Weller & Romney, 1988 ). Tasks might include pile sorts, triads, rank ordering, semantic frames, or free listing. Data from tasks usually fall into one of three categories: Similarity data, in which participants provide estimates of how alike two or more items are; ordered data, in which participants provide an ordinal rating of items on a single conceptual scale; and performance data, in which responses provided by participants can be graded as "correct" or "incorrect" ( Bernard, 2002 ).
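Similarity data from a pile-sort task can be aggregated in a straightforward way: for each pair of items, count the proportion of participants who placed the two items in the same pile. The sketch below illustrates this with invented items and sorts; it is not drawn from any study cited here.

```python
# Hypothetical sketch of aggregating pile-sort (similarity) data: each
# participant sorts the same items into piles, and we count how often each
# pair of items lands in the same pile. Items and sorts are invented.

from itertools import combinations

items = ["guideline", "colleague", "journal", "website"]

# One pile sort per participant: a list of piles, each pile a set of items.
sorts = [
    [{"guideline", "journal"}, {"colleague", "website"}],
    [{"guideline", "journal", "website"}, {"colleague"}],
    [{"guideline"}, {"journal", "website"}, {"colleague"}],
]

def similarity(a, b):
    """Proportion of participants who placed items a and b in the same pile."""
    together = sum(any(a in pile and b in pile for pile in s) for s in sorts)
    return together / len(sorts)

for a, b in combinations(items, 2):
    print(f"{a}-{b}: {similarity(a, b):.2f}")
```

The resulting pairwise similarities form the input to the quasi-statistical analyses described in the next section, such as multidimensional scaling and hierarchical clustering.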

Concept Mapping

Perhaps the most common form of formal ethnographic methods used in implementation research is “concept mapping.” Developed by William Trochim ( Trochim, 1989 ), this technique blends focus group interviewing and rank ordering with the quantitative techniques of multidimensional scaling and hierarchical cluster analysis. Concept mapping is a participatory qualitative research method that yields a conceptual framework for how a group views a particular topic. It uses inductive and structured group data collection processes to produce illustrative cluster maps depicting relationships among ideas in cluster form. It includes six distinct stages of activity: In the preparation stage, focal areas for investigation are identified and criteria for participant selection/recruitment are determined. In the generation stage, participants address the focal question and generate a list of items to be used in subsequent data collection and analysis. Qualitative data at this stage is obtained through “brainstorming” sessions. In the structuring stage, participants independently organize the list of generated items by sorting the items into piles based on perceived similarity. Each item is then rated in terms of its importance or usefulness to the focal question. In the representation or mapping stage, data are entered into specialized concept-mapping computer software ( Concept Systems, 2006 ), which is used to analyze participant data. Results include quantitative summaries of individual concepts, and visual representations or concept maps based on multidimensional scaling and hierarchical cluster analysis. In the interpretation stage, participants collectively process and qualitatively analyze the concept maps. This includes an assessment and discussion of cluster domains, evaluation of items that form each cluster, and discussion of content within each cluster. Based on this discussion, investigators may reduce the number of clusters. 
Finally, in the utilization stage, findings are discussed by investigators and study participants to determine how they best inform the original focal question.
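The representation stage described above relies on multidimensional scaling and hierarchical cluster analysis, normally carried out in specialized concept-mapping software. The sketch below shows only the clustering step, using a simple single-linkage agglomerative algorithm on an invented dissimilarity matrix (1 minus the proportion of participants who sorted each pair of statements together); the multidimensional scaling step is omitted, and the statements and values are hypothetical.

```python
# Hypothetical sketch of the clustering step in concept mapping:
# single-linkage agglomerative clustering of statements using a
# dissimilarity matrix derived from participants' pile sorts.
# Statements and dissimilarities are invented.

statements = ["S1", "S2", "S3", "S4"]

# Dissimilarity = 1 - (proportion of participants sorting the pair together).
dissim = {
    ("S1", "S2"): 0.2, ("S1", "S3"): 0.9, ("S1", "S4"): 0.8,
    ("S2", "S3"): 0.85, ("S2", "S4"): 0.95, ("S3", "S4"): 0.1,
}

def d(a, b):
    """Look up the dissimilarity between two statements (order-free)."""
    return 0.0 if a == b else dissim.get((a, b), dissim.get((b, a)))

def cluster(items, n_clusters):
    """Single-linkage agglomerative clustering down to n_clusters groups."""
    clusters = [{i} for i in items]
    while len(clusters) > n_clusters:
        # Find the pair of clusters with the smallest single-linkage distance.
        i, j = min(
            ((i, j) for i in range(len(clusters))
             for j in range(i + 1, len(clusters))),
            key=lambda p: min(d(a, b)
                              for a in clusters[p[0]] for b in clusters[p[1]]),
        )
        clusters[i] |= clusters[j]
        del clusters[j]
    return clusters

print(cluster(statements, 2))  # two clusters: {S1, S2} and {S3, S4}
```

In the interpretation stage, participants would then name and discuss the resulting clusters, and investigators might merge clusters that participants judge to describe a single domain.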

Concept mapping has been used in several D & I projects. Aarons and colleagues ( Aarons, Wells, Zagursky, Fettes, & Palinkas, 2009 ) used the technique to solicit information on factors likely to affect implementation of evidence based practices in public sector mental health settings. Providers and consumers participated in focus groups and generated a series of 105 unique statements describing barriers and facilitators of evidence based practice implementation. Participants rated statements according to importance and changeability, and real-time multidimensional scaling and hierarchical cluster analysis were used to generate a visual display of how statements clustered. Participants assigned meanings to, and identified appropriate names for, each of the 14 clusters identified ( Aarons et al., 2009 ). This analysis uncovered a complex implementation process and multiple leverage points where change efforts would be most likely to improve implementation. Other examples of concept mapping in projects with D & I foci or D & I components include: Jabbar and Abelson ( Jabbar & Abelson, 2011 ), Arrington and colleagues ( Arrington et al., 2008 ) and Behar and Hydaker ( Behar & Hydaker, 2009 ).

Case Study Research

Case study research is, in most cases, a hybrid method that has long been used when there are needs to understand complex conditions and contextual factors using multiple sources of data that can be integrated to aid understanding ( Yin, 2003a ). Sources of data may include documents, archival records, interviews, direct observation, participant observation, physical artifacts, survey and other quantitative data ( Yin, 2003b ). Data are combined from multiple sources to create a clear and comprehensive picture of the context and demands of the research setting, the processes involved in intervention roll-out and how they change over time, and the ways the intervention affects clinical and organizational practices and outcomes among service users. Single case designs are useful as tests of theoretical or conceptual models when the case is (1) unique, extreme, or revelatory; (2) thought to be representative or typical; or (3) because there is a need for longitudinal study ( Yin, 2003b ). Multiple case designs, sometimes called comparative case study designs, have different goals: (1) to predict similar results across cases (replication), or (2) to predict contrasting results across cases based on a particular theory or conceptual model (theoretical replication) ( Yin, 2003b ). The rationale for multiple case studies is considered analogous to conducting multiple experiments on the same topic using the same conceptual model to replicate results ( Yin, 2003b ). Multiple case studies require more resources and time than single case studies, but may be particularly useful in the context of practical clinical trials and other projects with multiple implementation sites.

Case study methods are sometimes underappreciated because of a perceived lack of rigor, but this may result from confusion between case study research and case study teaching ( Yin, 2003b ). In case study teaching, characteristics of cases are altered or enhanced to facilitate learning, while such alterations are not acceptable in case study research ( Yin, 2003b ). Lack of generalizability, particularly with single case studies, is a limitation of the case study approach, though Yin ( Yin, 2003b ) argues that scientists rarely generalize from a single study or experiment and suggests that rigorous case studies should be viewed as generalizable to theoretical propositions rather than to populations or universes, and thus should be used for analytic generalizations rather than statistical generalizations ( Yin, 2003b ). In this context, rigorous case studies provide a thorough and deep understanding of the case or cases under study—the types of information needed to understand why a particular implementation process succeeded, failed, or had mixed results. A variety of resources are available to support design and analysis of rigorous case studies, and to assess the quality and rigor of such research ( Caronna, 2010 ; Creswell, 1998 ; Stake, 2005 ; Yin, 1999 ; Yin, 2003a ; Yin, 2003b ). A recent case study of implementation of The Incredible Years parenting intervention in a residential substance abuse treatment program for women shows the value of such approaches in D & I research ( Aarons, Miller, Green, Perrott, & Bradway, 2012 ). The focus of the case study was on how the intervention was adapted to fit the setting and the implications of those adaptations on fidelity. Some changes were consistent with the approach and intent of the model while others were not. The authors use the case study to illustrate the need to develop implementation models that allow for greater flexibility and adaptation while staying true to critical frameworks and core elements.

Qualitative Comparative Analysis (QCA)

QCA is a special type of case study methodology based on principles of set theory and designed to elucidate cross-case patterns for studies with small sample sizes, using a “configurational” rather than a relationships-between-variables approach ( Ragin, 1997 ; Ragin, 1999b ; Ragin, Shulman, Weinberg, & Gran, 2003 ). That is, QCA provides a method of analyzing causal complexity by examining how different configurations of antecedent factors are necessary or sufficient for producing the outcomes of interest, rather than how a common set of antecedent conditions leads to a specific outcome ( Ragin, 1999a ; Ragin, 1999b ). Researchers using QCA select a case and collect data describing that case (e.g., using case study research methods), then construct truth tables that define causally relevant characteristics. Each case is reviewed to complete a row of the truth table, indicating whether each characteristic is true or false for that case. Once all cases are included and the truth table is complete, each row of the table is reviewed to identify patterns in causal combinations and to simplify the table by combining rows that show common patterns leading to the same outcome. When the table is fully simplified, an equation or set of equations can be written to describe the causal pathway(s). QCA has been used increasingly in health services research, but has had little application in D & I research. See Ford and colleagues ( Ford, Duncan, & Ginter, 2005 ) for one D & I example.
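The truth-table procedure described above can be sketched in code. The following is a minimal illustration only: the cases, conditions, and outcomes are entirely hypothetical, and the simplification step implements just the most basic reduction (combining pairs of rows that differ in exactly one condition and share an outcome); real QCA software performs full Boolean minimization.

```python
from collections import defaultdict

# Hypothetical cases: causally relevant conditions (True/False) plus outcome.
cases = {
    "Site A": {"leadership": True,  "funding": True,  "training": False, "outcome": True},
    "Site B": {"leadership": True,  "funding": True,  "training": True,  "outcome": True},
    "Site C": {"leadership": False, "funding": True,  "training": False, "outcome": False},
    "Site D": {"leadership": True,  "funding": False, "training": False, "outcome": False},
}
conditions = ["leadership", "funding", "training"]

def truth_table(cases, conditions):
    """One row per observed configuration of conditions -> set of outcomes."""
    table = defaultdict(set)
    for data in cases.values():
        row = tuple(data[c] for c in conditions)
        table[row].add(data["outcome"])
    return dict(table)

def simplify(table, conditions):
    """Combine pairs of rows that differ in exactly one condition and share
    the same outcome; the differing condition is marked irrelevant (None)."""
    rows = {row: next(iter(out)) for row, out in table.items() if len(out) == 1}
    combined = {}
    for r1, o1 in rows.items():
        for r2, o2 in rows.items():
            diffs = [i for i in range(len(conditions)) if r1[i] != r2[i]]
            if o1 == o2 and len(diffs) == 1:
                merged = tuple(None if i in diffs else r1[i] for i in range(len(r1)))
                combined[merged] = o1
    return combined

table = truth_table(cases, conditions)
reduced = simplify(table, conditions)
# Sites A and B differ only in "training" and share a positive outcome, so
# leadership AND funding (training irrelevant) is sufficient here.
print(reduced)  # {(True, True, None): True}
```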

Quantitative Designs and Considerations within Mixed Methods Dissemination and Implementation Research

As a result of the strict requirements necessary to produce reliable and valid results of statistical analyses, quantitative components of D & I research are more constrained than qualitative approaches. That is, the structures associated with “real-world” implementation settings, procedures necessary for implementation, and the composition and methods of the intervention, combined with the hypotheses to be tested and the limits of specific statistical procedures, can significantly constrain study designs for quantitative outcomes. These limits suggest opportunities for mixed-methods integration: Quantitative requirements for valid and reliable measures that are used without adaptation can be tempered by qualitative data collection procedures that can be modified to explore unexpected findings or processes.

Efforts to conduct effectiveness research in routine clinical settings have also led to the development of less-rigid approaches and designs that are more acceptable to stakeholders, including non-randomized designs, need or risk-based assignment, interrupted time series designs, and pragmatic clinical trials. In the sections that follow, we review quantitative methods of particular relevance to D & I research, and discuss mixed methods applications for each approach that can fill gaps or address weaknesses associated with each approach.

Non-randomized Designs

The exigencies of particular settings or situations, and needs to improve participation and buy-in from different stakeholders, sometimes require the use of non-randomized designs. Several of these approaches are well-suited to mixed methods D & I research and, when threats to internal validity can be managed, are advantageous because they are more likely to be generalizable ( West et al., 2008 ).

Need- or risk-based assignment to intervention conditions

Need-based assignment (NBA) is a potentially promising method for managing clinical trials implementation in settings where randomization is not acceptable or possible ( Finkelstein, Levin, & Robbins, 1996a ; Finkelstein, Levin, & Robbins, 1996b ; West et al., 2008 ). NBA tends to be compatible with routine practice because, when properly designed, it replicates what frontline practitioners already do when developing treatment plans. In this context, formative qualitative assessments can help researchers determine the design and approach that is most appropriate for the settings in which implementation will take place. Pre-intervention assessments, administered to all participants, provide baseline need scores. Participants with scores exceeding a pre-specified threshold are offered high-intensity services (the experimental condition), while those below the threshold are offered low-intensity services (the comparison condition). Follow-up assessments are compared across conditions to assess intervention effects. Since the groups differ at baseline, a direct comparison of follow-up outcomes across intervention conditions does not provide a valid estimate of intervention effects. Rather, adjustment is made using statistical models applied to each group to account for the pre-existing differences in baseline needs and provide a more appropriate estimate of intervention effects.
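The assignment rule, and the reason a naive comparison of follow-up outcomes fails, can be shown with a small sketch (scores, threshold, and participant IDs are purely illustrative):

```python
from statistics import mean

THRESHOLD = 10  # pre-specified need threshold (hypothetical)

def assign_condition(need_score, threshold=THRESHOLD):
    """High-intensity (experimental) services for scores at or above the
    threshold; low-intensity (comparison) services otherwise."""
    return "high-intensity" if need_score >= threshold else "low-intensity"

baseline = {"p1": 14, "p2": 6, "p3": 11, "p4": 9, "p5": 12, "p6": 7}
assignment = {pid: assign_condition(s) for pid, s in baseline.items()}

high = [s for s in baseline.values() if s >= THRESHOLD]
low = [s for s in baseline.values() if s < THRESHOLD]
# The groups differ at baseline by construction, so follow-up outcomes
# cannot be compared directly; statistical adjustment is required.
print(mean(high) - mean(low))  # baseline gap: 5.0
```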

A methodological challenge in application of need-based assignment in multi-level service structures is accommodating need at different levels. For example, some agencies may have greater needs for an intervention than others (i.e., lower functionality, higher stress) and thus should be prioritized for agency-level interventions. Additional prioritization may be warranted at provider and consumer levels (greater training needs for providers; higher symptom severity among children). To date, methods for applying needs-based assignment at multiple levels have not yet been developed. As is often the case, however, limitations of one approach suggest opportunities for others. In this case, qualitative data collection might be used to help formulate the most appropriate approaches for particular settings, and to assess need at organizational or other levels.

Regression-discontinuity and interrupted time series designs

These quasi-experimental designs present an alternative approach when randomization is not possible but existing data are available (e.g., through electronic medical records) or when data can be collected over time, prior to assessment of intervention outcomes ( Cook & Campbell, 1979 ; Imbens & Lemieux, 2008 ; Lee & Lemieux, 2010 ; Shadish, Cook, & Campbell, 2002 ; Thistlethwaite & Campbell, 1960 ; West et al., 2008 ). Regression discontinuity analysis can be applied to data collected under need-based assignment, fitting separate regression curves to those who fall above the threshold and receive the high-intensity intervention and those who fall below the threshold and receive the low-intensity intervention. The gap (“discontinuity”) between the two regression curves at the threshold is used to assess the intervention effect. Interrupted time series analysis is a special type of regression discontinuity analysis, with time used as the thresholding variable. This method uses data collected from periods prior to the intervention to establish trends; changes in trends following the intervention can then be examined as evidence of intervention effects. Results of these types of designs often integrate well with qualitative process and evaluation data collected over the course of the study. Changes in trends over time, discontinuities identified following interventions, lags in effects, or lack of intervention effects can often be explained when qualitative process evaluation data have been collected alongside quantitative data.
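A minimal numeric sketch of the regression-discontinuity logic follows. The data are synthetic and noise-free so the arithmetic is transparent; an applied analysis would add uncertainty estimates, bandwidth choices, and functional-form checks.

```python
def ols(xs, ys):
    """Closed-form simple linear regression: returns (intercept, slope)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return my - slope * mx, slope

THRESHOLD = 10.0

# Synthetic need scores (x) and follow-up outcomes (y); the high-intensity
# group (x >= threshold) lies on a line shifted up by 3 units.
below_x, below_y = [2, 4, 6, 8], [3.0, 4.0, 5.0, 6.0]          # y = 2 + 0.5x
above_x, above_y = [10, 12, 14, 16], [10.0, 11.0, 12.0, 13.0]  # y = 5 + 0.5x

a_lo, b_lo = ols(below_x, below_y)
a_hi, b_hi = ols(above_x, above_y)

# The gap between the two fitted lines at the threshold estimates the effect.
effect = (a_hi + b_hi * THRESHOLD) - (a_lo + b_lo * THRESHOLD)
print(effect)  # 3.0
```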

Pragmatic clinical trials: Experimental designs with random assignment in “real world” settings

Pragmatic or practical clinical trials (PCTs) ( Schwartz & Lellouch, 2009 ; Tunis, Stryer, & Clancy, 2003 ) are designed to inform practical decision-making in routine clinical settings, and can be contrasted with explanatory clinical trials, the focus of which is to identify treatment effects under controlled laboratory conditions. Because of their practical focus, PCTs are often designed as comparative effectiveness trials of alternative interventions. Inclusion criteria tend to be minimally restrictive, data are collected for a range of health outcomes rather than a narrow few, and implementation is tested in a variety of care settings ( Tunis et al., 2003 ).

PCTs and explanatory clinical trials are based on different paradigms and address distinct aims and objectives, some of which are well-suited to mixed methods approaches. Most importantly, in explanatory trials, contextual factors are usually considered confounders to be controlled, while the same factors are often considered integral components of implementation protocols in pragmatic trials. As an example, when comparing behavioral therapy versus medication for the treatment of adolescent depression, behavioral therapy invariably requires more contact between patient and provider. From the explanatory perspective, such a difference in the intensity of patient-provider contact is considered a confounding factor that must be controlled in order to rule out the possibility that observed differences between therapy and medication patients are a result of differences in the intensity of patient-provider contact. From the pragmatic perspective, however, the higher intensity of patient-provider contact is a natural component of the implementation of the therapy in its practical context ( Schwartz & Lellouch, 2009 ). The clinical decision that needs to be made for implementation is how the therapy “bundle,” including the embedded higher intensity of patient-provider contact, differs from the medication “bundle,” including the embedded lower intensity of patient-provider contact. Mixed methods approaches offer opportunities to study and describe contextual and other non-controlled factors at work in PCTs, and findings can be used to address implementation barriers.

Randomization in PCTs

Randomization can be extremely valuable in PCTs because, without it, it is difficult to determine whether observed differences reflect baseline differences between the groups that do and do not receive an intervention, or whether the results can be attributed to the intervention itself ( Hotopf, 2002 ). For these reasons most PCTs include some form of randomization, though this can sometimes be difficult in clinical settings if randomization distorts routine care delivery or clinician-patient relationships, or if the intervention targets a vulnerable population with reservations about research participation. Irrespective of randomization designs, PCT researchers must balance and understand the effects of conducting a study, and collecting data, on the clinical settings in which they are working ( Thorpe et al., 2009 ) and the effects of those settings on intervention outcomes. Qualitative approaches have important applicability here, helping to identify barriers or facilitators of implementation, stakeholder perspectives, and adaptations that can increase the likelihood of success ( Luce et al., 2009 ; Oakley, Strange, Bonell, Allen, & Stephenson, 2006 ). Qualitative data collection can also be used to monitor the effects of the research enterprise on organizational functioning and clinical processes so that negative effects can be mitigated to the greatest extent possible or, for those that cannot be mitigated, carefully described. Such descriptions can provide invaluable information for decision makers considering intervention adoption and for researchers designing alternative approaches.

Parallel randomized and nonrandomized trial designs

In situations where a large proportion of eligible individuals decline randomization, external validity is threatened. Instead of excluding these candidates, it is possible to use designs in which participants are retained and entered into a separate nonrandomized trial based on their treatment preferences. In this case, addition of the nonrandomized trial data to the randomized trial data can enhance generalizability of results. Parallel randomized and nonrandomized trial designs have considerable potential because they take advantage of the stronger internal validity of the RCT and enhanced generalizability from the quasi-experimental trial. Qualitative data collection with participants who refuse randomization can shed light on factors affecting willingness to be randomized and determine how those factors might be related to trial outcomes.

Selection bias

Selection bias is a common challenge for implementation studies in which participants are allowed to self-select. Self-selection means that those receiving one intervention are likely to be different from those receiving the other intervention. For example, patients with severe conditions may be more likely to receive more intensive interventions, while patients with milder conditions may be more likely to receive less intensive interventions or no active intervention beyond “watch and monitor.” In such situations, direct comparisons of outcomes across intervention conditions may be misleading. Using qualitative data collection to understand self-selection may help researchers to better target interventions.

Propensity scores, the conditional probability of receiving a specific intervention given a set of observed covariates ( Rosenbaum, 2010 ; Rosenbaum & Rubin, 1983 ; Rosenbaum & Rubin, 1984 ), are a promising approach for addressing selection bias resulting from imbalances between intervention and comparison groups on observed covariates. Propensity score methods include weighting, stratification, and matching ( Rosenbaum, 2010 ; Rosenbaum & Rubin, 1983 ; Rosenbaum & Rubin, 1984 ). One limitation of the approach is that propensity score methods can only address overt bias, namely selection bias due to observed confounding factors. If hidden bias resulting from unobserved confounding factors is present, propensity score methods are limited: they can balance the observed covariates and any components of hidden bias that are correlated with observed covariates, but additional methodologies such as instrumental variable analysis ( Angrist, Imbens, & Rubin, 1996 ) and sensitivity analyses ( Rosenbaum, 2010 ; Rosenbaum & Rubin, 1983 ; Rosenbaum & Rubin, 1984 ) are needed to more fully address these problems. Qualitative assessments can be used to uncover unobserved confounders and identify factors that might be measured for inclusion in propensity score calculations.
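The stratification-and-weighting idea can be illustrated with a toy example. The records below are hypothetical, and the propensity score is estimated crudely as the treated fraction within a single stratum; a real analysis would model many covariates (e.g., with logistic regression).

```python
from collections import defaultdict

# Hypothetical observational records: (severity stratum, treated?, outcome).
# Treatment is self-selected and more common among severe cases.
records = [
    ("high", True, 8), ("high", True, 9), ("high", False, 5),
    ("low", True, 7), ("low", False, 6), ("low", False, 6), ("low", False, 5),
]

# Propensity score within each stratum: P(treated | stratum).
counts = defaultdict(lambda: [0, 0])  # stratum -> [n_treated, n_total]
for stratum, treated, _ in records:
    counts[stratum][1] += 1
    counts[stratum][0] += treated
propensity = {s: t / n for s, (t, n) in counts.items()}

# Inverse-probability weighting balances the observed covariate across groups.
num_t = den_t = num_c = den_c = 0.0
for stratum, treated, y in records:
    e = propensity[stratum]
    w = 1 / e if treated else 1 / (1 - e)
    if treated:
        num_t += w * y; den_t += w
    else:
        num_c += w * y; den_c += w

effect = num_t / den_t - num_c / den_c  # weighted treated mean - control mean
```

Note that this only corrects for the observed stratum; any confounder not in the record remains a source of hidden bias, as discussed above.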

Design and Analysis for Multi-level Interventions

Mental health service delivery is often multi-level in nature, with clients nested within providers, providers nested within agencies or clinics, and agencies nested within county and state policies. A common design used for multi-level interventions is the group or cluster randomized design, with randomized assignment at the highest level of the intervention, most often the agency or clinic level. This approach has two significant limitations, however. First, the evaluation is subject to variance inflation at the agency level; second, there is no information that allows us to untangle the impact of the various components of the intervention targeted at each level, nor to assess whether the interventions at those levels interact ( Donner, 1998 ; Donner & Klar, 1994 ; Murray, 1998 ). Split plot designs present an alternative that addresses the limits of cluster randomized designs ( Fisher, 1925 ; Yates, 1935 ). These designs are particularly useful for state-level rollouts because they improve statistical efficiency and enable the unique contributions from interventions at each level to be disentangled. For example, agencies can be randomized to either receive an agency-level intervention or remain in usual care. Then, within agencies, providers are randomized to either receive a provider-level intervention or remain in usual practice. Finally, within agencies and providers (with all combinations of agency- and provider-level interventions), consumers are randomized to either receive consumer-level interventions (e.g., engagement strategies) or remain in usual care. Combining the three phases of randomization, we can focus on main-effects analyses to separately assess the impacts of the three intervention components. Under the assumption of additivity, each of the three intervention components can be estimated and tested using the entire sample, achieving full statistical efficiency. Moreover, each of the intervention effects is free from design effects (variance inflation) from the higher levels. Disadvantages of the split plot design include the need for clearly defined interventions at each level and adequate sample sizes. Mixed methods approaches to these designs typically include qualitative data collection for process and implementation evaluations to ensure understanding of critical factors affecting processes and outcomes at different levels. Such evaluations might include focus group interviews with consumers, individual or focus group interviews with clinicians, and key informant interviews with executive directors or other administrative staff. Participant observation can also be of great value in identifying and describing how processes play out at each level, and how they interact across levels.
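The three-phase randomization can be sketched as a simple assignment routine (a hypothetical illustration; arm labels and counts are invented, and a real trial would add constraints such as balanced allocation):

```python
import random

def split_plot_assign(n_agencies, n_providers, n_consumers, seed=0):
    """Independent randomization at each level: agencies, then providers
    within agencies, then consumers within providers."""
    rng = random.Random(seed)
    design = {}
    for a in range(n_agencies):
        providers = {}
        for p in range(n_providers):
            consumers = {
                f"consumer_{c}": rng.choice(["consumer-intervention", "usual care"])
                for c in range(n_consumers)
            }
            providers[f"provider_{p}"] = {
                "arm": rng.choice(["provider-intervention", "usual practice"]),
                "consumers": consumers,
            }
        design[f"agency_{a}"] = {
            "arm": rng.choice(["agency-intervention", "usual care"]),
            "providers": providers,
        }
    return design

design = split_plot_assign(4, 3, 10)
# Under additivity, each level's main effect is estimated on the full sample.
```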

Quantitative Approaches to Data Collection and Integration within Mixed Methods D & I Studies

Survey methods.

Survey methods are widely used, cost-effective methods of collecting large amounts of data that are representative of populations of interest. They can be particularly useful to D & I researchers conducting multi-level implementation projects, and often are developed and administered using mixed methods approaches ( Beatty & Willis, 2007 ; Fowler, 2009 ). Formative qualitative work may be used to identify key themes and constructs to be assessed in a survey, and cognitive interviewing may be used to develop, refine, and validate survey items ( Beatty & Willis, 2007 ). Surveys can also include open-ended questions that allow respondents to answer in their own words. When such mixed methods techniques are employed, a successful survey can be characterized as an integrated mixed methods approach that uses qualitative methods to develop and ascertain the meaning of questions, quantitative methods to collect the structured data required for the study, and open-ended qualitative questions to explore areas that are not appropriate for closed-ended responses or for which adequate information is not available to create fixed response categories.

Target Populations and Sample Selection

While most surveys target data collection from individual respondents in a specified population (e.g., clients served by an agency), many D & I projects also seek data at the agency or organizational level (e.g., health care facilities or business entities). In either case, researchers must define the population, specify how members will be identified and approached, and tailor questions to the population. For D & I in state systems, for example, respondents may include state policymakers, such as commissioners, deputy directors, or other executive leadership, organizational administrators, as well as clinicians, patients, and families. Because most projects cannot afford to administer surveys to the entire target population, sampling is necessary and the sampling strategy must allow population-level inferences. When the population is small (e.g., state policymakers) key informant or other individual interviews may be more useful and cost-effective than surveys. Whether for qualitative or quantitative approaches, sample selection methods depend on the research questions, the expected ranges of responses, and the mechanisms available for accessing members in the target population. A number of excellent resources exist for survey sampling approaches and methods ( Babbie, 1990 ; Fowler, 2009 ; Frankel et al., 1999 ; Kish, 1995 ; Marsden & Wright, 2010 ; Rossi, Wright, & Anderson, 1983 ). Similar resources are available for sampling in qualitative research ( Blankertz, 1998 ; Draucker, Martsolf, Ross, & Rusk, 2007 ; Morse, 2000 ; Palinkas et al., 2013 ; Strauss & Corbin, 1998 ).
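For instance, a basic stratified sampling routine can be written as follows (roles, frame, and sample sizes are hypothetical; a real survey would add design weights for unequal selection probabilities):

```python
import random

def stratified_sample(population, stratum_of, n_per_stratum, seed=0):
    """Draw a simple random sample of up to n units from each stratum."""
    rng = random.Random(seed)
    strata = {}
    for unit in population:
        strata.setdefault(stratum_of(unit), []).append(unit)
    return {
        s: rng.sample(units, min(n_per_stratum, len(units)))
        for s, units in strata.items()
    }

# Hypothetical frame: respondents tagged by role in the service system.
frame = (
    [{"id": i, "role": "clinician"} for i in range(40)]
    + [{"id": i, "role": "administrator"} for i in range(40, 50)]
    + [{"id": i, "role": "policymaker"} for i in range(50, 53)]
)
sample = stratified_sample(frame, lambda u: u["role"], 5)
# Tiny strata (here, policymakers) are exhausted by sampling -- a signal
# that key informant interviews may be more appropriate than a survey.
```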

Questionnaires

Survey methods are typically implemented using a questionnaire (or instrument) that includes a collection of questions inquiring about specific behaviors or attributes. A simple questionnaire presents the same list of questions sequentially, in the same order, to all respondents. More complex questionnaires can be constructed that are customized to present a set of questions selected according to the characteristics of the specific respondent (e.g., a survey about adolescent mental health services would skip questions about pregnancy for male respondents). Such use of branching logic is facilitated by information technology in administering surveys (e.g., computer assisted interviewing or CAI) ( Couper et al., 1998 ). Surveys can be conducted either in person, by telephone, using the web ( Couper, 2008 ), or via ecological momentary assessment (EMA) using mobile devices ( Shiffman, Stone, & Hufford, 2008 ).
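The branching (skip) logic just described can be expressed directly in code. This small sketch uses the pregnancy-question example from the text; the question names themselves are invented.

```python
def select_questions(respondent):
    """Return the question sequence for one respondent, applying a skip
    rule: pregnancy items are omitted for male respondents."""
    questions = ["age", "services_received", "satisfaction"]
    if respondent.get("sex") != "male":
        questions.insert(2, "pregnancy_history")
    return questions

print(select_questions({"sex": "male"}))
# ['age', 'services_received', 'satisfaction']
print(select_questions({"sex": "female"}))
# ['age', 'services_received', 'pregnancy_history', 'satisfaction']
```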

The design of a good survey questionnaire usually follows a back-engineering approach, starting with the ultimate goal of data collection—the aims of the study and the hypotheses to be tested. Many experienced investigators begin their design process by drafting an outline of the final report and detailing how they will answer their fundamental analysis questions ( Scheuren, 2013 ). This pinpoints which pieces of information will be required and leads to construction of an analysis plan that connects data collection objectives to specific questions and specifies the ways questions should be asked ( Scheuren, 2013 ). Similar back-engineering is beneficial for qualitative questions, even if the research is exploratory and theory-generating. That is, development of the approach as well as materials, such as interview guides, should be clearly tied to the desired end-product, including expectations for how the approach and materials might change over time. The draft final report thus helps the researcher identify the information needed to describe all study participants, develop a clear sampling and data analysis plan, detail opportunities for the approach to evolve, and specify the key questions to be answered.

Survey Administration

Surveys can be administered in various ways, including paper-and-pencil, computer-assisted personal interviews (CAPI), computer-assisted telephone interviews, web-based surveys, and surveys using mobile devices ( Couper, 2008 ; Couper et al., 1998 ; Shiffman et al., 2008 ). While interviewer-administered surveys provide a high level of accuracy and more complete data, self-administered surveys are less costly and can provide greater confidentiality and improved respondent comfort ( Tourangeau & Smith, 1996 ). Information technology-based approaches can increase accuracy and reduce human error, though they may require programming expertise and can be vulnerable to technology failures. Different modes of administration can be particularly useful in D & I research, with mode selected to optimize comfort for and response from the target population. Here too, qualitative data can provide information to researchers who are making decisions about which survey modalities are best for particular topics and participants.

Survey Modalities and Mode Effects

Using a combination of survey administration modes can optimize response rates while containing survey costs. For example, if formative work suggests that significant proportions of the target population are comfortable with self-administered web surveys, this approach might be attempted first, followed by interviewer-administered telephone surveys for non-respondents. A third mode might also be deployed if needed, with an interviewer traveling to the respondent to administer a face-to-face survey. When multiple modes of administration are combined, however, responses may vary across modes. For example, participants may be more willing to respond accurately to sensitive questions in self-administered modes than in face-to-face modes ( Tourangeau & Smith, 1996 ). Such mode effects may require statistical adjustments ( de Leeuw, 2005 ; Fowler, Jr. et al., 2002 ) or, alternatively, the use of randomized response techniques ( Lensvelt-Mulders, Hox, van der Heijden, & Maas, 2005 ) to improve response validity.
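A sequential mixed-mode fielding plan of this kind can be sketched as follows (the mode order and the response predicate are illustrative stand-ins for actual fieldwork):

```python
MODES = ["web", "telephone", "face-to-face"]  # cheapest mode first

def field_survey(respondents, responds_via):
    """Try each mode in order, escalating only for non-respondents.
    `responds_via(r, mode)` stands in for an actual fielding attempt."""
    completed, remaining = {}, list(respondents)
    for mode in MODES:
        still_out = []
        for r in remaining:
            if responds_via(r, mode):
                completed[r] = mode
            else:
                still_out.append(r)
        remaining = still_out
    return completed, remaining

# Hypothetical behavior: r1 answers on the web, r2 only by phone,
# r3 only in person, and r4 never responds.
prefs = {"r1": "web", "r2": "telephone", "r3": "face-to-face"}
completed, nonresponse = field_survey(
    ["r1", "r2", "r3", "r4"], lambda r, m: prefs.get(r) == m
)
print(completed)    # {'r1': 'web', 'r2': 'telephone', 'r3': 'face-to-face'}
print(nonresponse)  # ['r4']
```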

Measurement Development for Dissemination and Implementation Research

Researchers are increasingly developing more rigorous methods of measurement development, and taking advantage of technological advances that make better measurement possible and less burdensome for participants. Such methods have not yet been widely used in D & I research, but their benefits, particularly as common outcome metrics are developed, suggest significant opportunities for application in this area. For example, in surveying agencies in a dissemination/implementation program, the methods described below can be used to customize questions for specific agencies or service users so that they provide the most useful information for each unique situation, reduce respondent burden, and avoid the pitfalls of “one size fits all” approaches.

Item Response Theory (IRT)

Classical and IRT measurement methods differ dramatically in approach to administration and scoring. For example, consider a track and field meet in which athletes participate in a hurdles race and in high jump. Suppose that the hurdles are not all the same height and the score is determined by the runner’s time and the number of hurdles cleared. For the high jump, the cross bar is raised incrementally and athletes try to jump over the bar without dislodging it. The first of these two events is like a traditionally scored objective test: runners attempt to clear hurdles of varying heights, analogous to answering questions of varying difficulty. In either case, a specific counting operation measures ability to clear hurdles or answer questions. On the high jump, ability is measured by the highest position the athlete clears. IRT measurement uses the same logic as the high jump: Items are arranged on a continuum with fixed points of increasing difficulty of endorsement. Scores are measured by the location on the continuum of the most difficult item endorsed. In IRT, scores are obtained using a scale point rather than a count.

These methods of scoring hurdles and high jump, or their analogues in traditional and IRT measures, contrast sharply: If hurdles are arbitrarily added or removed, number of hurdles cleared cannot be compared across races run with different hurdles or different numbers of hurdles. Scores lose their comparability if item composition is changed. The same is not true, however, of the high jump or of IRT scoring. If one of the positions on the bar were omitted, height cleared is unchanged and only the precision of the measurement at that point on the scale is affected. Thus, in IRT scoring, a certain number of items can be arbitrarily added, deleted or replaced without losing comparability of scores, thus reducing participant burden and costs of administration. This property of scaled measurement, compared with counts, is the most salient advantage of IRT over classical measurement.
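The contrast can be made concrete in code. This sketch scores a short instrument both ways; the item difficulties and responses are invented, and a full IRT analysis would estimate ability from the complete response pattern rather than simply taking the hardest endorsed item.

```python
def classical_score(responses):
    """Traditional scoring: count the items endorsed."""
    return sum(responses.values())

def scale_score(difficulties, responses):
    """High-jump logic: location of the hardest item endorsed."""
    endorsed = [difficulties[i] for i, yes in responses.items() if yes]
    return max(endorsed) if endorsed else None

difficulties = {"q1": -1.0, "q2": 0.0, "q3": 1.0, "q4": 2.0}
responses = {"q1": True, "q2": True, "q3": True, "q4": False}

print(classical_score(responses))            # 3
print(scale_score(difficulties, responses))  # 1.0

# Drop q2 from the instrument: the count changes, the scale location does not.
subset = {k: v for k, v in responses.items() if k != "q2"}
print(classical_score(subset))               # 2
print(scale_score(difficulties, subset))     # 1.0
```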

Computerized adaptive testing (CAT) can be used to develop banks of items for specific populations, with a range of endorsement difficulties ( Weiss, 1985 ), for use in IRT-based outcomes measurement. Cognitive interviewing and other qualitative approaches can be used to understand participants’ experiences of endorsement difficulty for particular items, as well as factors associated with difficulty of endorsement. Once item banks are available, they can be used to build complex surveys that adapt to individual participants’ characteristics and response patterns ( Gibbons et al., 2013 ). While use of CAT and IRT has been widespread in educational measurement, it has been less widely used in D & I research. In addition to cognitive interviewing, qualitative methods such as focus groups and concept mapping can be used to inform the item development necessary to use IRT approaches in D & I research.
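A toy adaptive-testing loop, assuming a small bank of IRT-calibrated difficulties, illustrates the idea. All values are hypothetical, and the update rule is a crude bisection; operational CAT uses information-based item selection and likelihood-based ability estimation.

```python
def run_cat(bank, endorses, n_items=3, start=0.0, step=1.0):
    """Pick the unused item closest to the current ability estimate, then
    move the estimate toward the response and halve the step."""
    ability, administered = start, []
    for _ in range(n_items):
        remaining = {i: d for i, d in bank.items() if i not in administered}
        item = min(remaining, key=lambda i: abs(remaining[i] - ability))
        administered.append(item)
        ability += step if endorses(bank[item]) else -step
        step /= 2
    return ability, administered

# Simulated respondent with true ability 0.7 on the same scale: they
# endorse any item whose difficulty is at or below their ability.
bank = {"a": -2.0, "b": -1.0, "c": 0.0, "d": 1.0, "e": 2.0}
estimate, order = run_cat(bank, lambda difficulty: difficulty <= 0.7)
print(order, estimate)  # ['c', 'd', 'b'] 0.75
```

Only three of the five items are administered, yet the estimate converges toward the true ability, which is how CAT reduces respondent burden.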

Vertical Scaling

Vertical or developmental scaling is an IRT method frequently used in educational assessments to provide a single scale that is applicable across all grade levels so that growth in learning can be measured with a common yardstick ( Tong & Kolen, 2007 ). In the measurement of child outcomes following a D & I project, items that may be appropriate for a 14- or 15-year-old may not be appropriate for a 9- or 10- year-old. As long as there is a subset of common “anchor” items that can be used for adjacent developmental (age) groups, IRT-based vertical scaling can be used to provide a common assessment across different developmental levels. These techniques can be used to deliver lower-cost, less burdensome, outcome measures that can be compared across similar D & I projects.

Summary & Conclusions

Mixed methods approaches to D & I research hold great promise for unpacking the processes and factors that are often hidden within the black boxes that have been the hallmark of evidence-based practice implementation. A multitude of qualitative techniques are available to meet the needs of D & I researchers, ranging from traditional ethnographic techniques to rapid ethnographic assessment, and from purely observational techniques to hybrid designs that inherently combine both qualitative and quantitative methods. Conventional survey methods have their place as well, but newer technologies, combined with improvements in the underpinnings of measurement theory, make possible a new generation of more valid and less burdensome assessment processes. Together, the methods described in this paper provide a set of approaches that could be considered a toolkit for mixed methods D & I research.

Such a toolkit has particularly important application in multi-level state-related policy research that involves scaling up of evidence-based practices. These methods are useful for comparing the different perspectives of the various stakeholders and constituents—ranging from policy-makers to agency directors and management; from front-line clinical staff to patients and families—and for developing clear understandings of implementation successes and failures. Mixed methods provide the opportunity to produce enriched understandings of the complexities of implementation processes, and to tap into the nuances of vexing barriers and promising facilitators of implementation. Together, they provide necessary methods for improving strategies for effective, efficient, and sustainable roll-outs of evidence-based practices.

Acknowledgments

This work was funded by an award from the National Institute of Mental Health (P30-MH090322; K. Hoagwood, PI).

1 For analyzing and reporting qualitative and mixed methods data, see Miles and Huberman ( Miles & Huberman, 1994 ); Creswell ( Creswell, 1998 ); Creswell and Plano Clark ( Creswell & Plano Clark, 2007 ); Denzin and Lincoln ( Denzin & Lincoln, 1998 ; Denzin & Lincoln, 2005 ); Bernard ( Bernard, 2011 ); and Bourgeault et al. ( Bourgeault, Dingwall, & de Vries, 2010 ). For focus group interviews, see Morgan and Krueger ( Morgan & Krueger, 1998 ). Those interested in grounded theory and constant comparative analyses should refer to Charmaz ( Charmaz, 2001 ; Charmaz, 2006 ), Creswell ( Creswell, 1998 ), and Glaser and Strauss ( Glaser & Strauss, 1967 ). See ( Hsieh & Shannon, 2005 ; Krippendorff, 2004 ) for detailed explanations of content analysis. For discussions of rigor and threats to validity in qualitative research, including reliability, validity, and trustworthiness, see ( Davies & Dodd, 2002 ; Krefting, 1991 ; Morse, Barrett, Mayan, Olson, & Spiers, 2002 ; Poland, 1995 ; Whittemore, Chase, & Mandle, 2001 ).

Contributor Information

Carla A. Green, Center for Health, Kaiser Permanente Northwest.

Naihua Duan, Professor Emeritus, Columbia University Medical Center.

Robert D. Gibbons, Professor of Medicine & Health Studies, Director, Center for Health Statistics, University of Chicago.

Kimberly E. Hoagwood, Cathy and Stephen Graham Professor of Child and Adolescent Psychiatry, Department of Child and Adolescent Psychiatry, New York University Langone Medical Center.

Lawrence A. Palinkas, Albert G. and Frances Lomas Feldman Professor of Social Policy and Health, School of Social Work, University of Southern California.

Jennifer P. Wisdom, Associate Vice President for Research, George Washington University.

  • Aarons GA, Miller EA, Green AE, Perrott JA, Bradway R. Adaptation happens: a qualitative case study of implementation of The Incredible Years evidence-based parent training programme in a residential substance abuse treatment programme. Journal of Children’s Services. 2012;7(4):233–245.
  • Aarons GA, Wells RS, Zagursky K, Fettes DL, Palinkas LA. Implementing evidence-based practice in community mental health agencies: a multiple stakeholder analysis. American Journal of Public Health. 2009;99(11):2087–2095. doi:10.2105/AJPH.2009.161711.
  • Adler PA, Adler P. Observational techniques. In: Denzin NK, Lincoln YS, editors. Collecting and Interpreting Qualitative Materials. Thousand Oaks, CA: Sage Publications; 1998. pp. 79–109.
  • Angrist JD, Imbens GW, Rubin DB. Identification of causal effects using instrumental variables. Journal of the American Statistical Association. 1996;91:444–455.
  • Angrosino MV. Recontextualizing observation: Ethnography, pedagogy, and the prospects for a progressive political agenda. In: Denzin NK, Lincoln YS, editors. Handbook of Qualitative Research. 3rd ed. Thousand Oaks, CA: Sage Publications; 2005. pp. 729–745.
  • Arrington B, Kimmey J, Brewster M, Bentley J, Kane M, Van BC, et al. Building a local agenda for dissemination of research into practice. Journal of Public Health Management and Practice. 2008;14(2):185–192. doi:10.1097/01.PHH.0000311898.03573.28.
  • Babbie E. Survey Research Methods. 2nd ed. Belmont, CA: Wadsworth; 1990.
  • Beatty PC, Willis GB. Research synthesis: The practice of cognitive interviewing. Public Opinion Quarterly. 2007;71(2):287–311.
  • Beebe J. Rapid Assessment Process: An Introduction. Lanham, MD: AltaMira Press; 2001.
  • Behar LB, Hydaker WM. Defining community readiness for the implementation of a system of care. Administration and Policy in Mental Health. 2009;36(6):381–392. doi:10.1007/s10488-009-0227-x.
  • Bernard HR. Research Methods in Anthropology: Qualitative and Quantitative Approaches. Walnut Creek, CA: AltaMira Press; 2002.
  • Bernard HR. Research Methods in Anthropology: Qualitative and Quantitative Approaches. 5th ed. Lanham, MD: AltaMira Press; 2011.
  • Berwick DM. Disseminating innovations in health care. The Journal of the American Medical Association. 2003;289(15):1969–1975. doi:10.1001/jama.289.15.1969.
  • Blankertz L. The value and practicality of deliberate sampling for heterogeneity: A critical multiplist perspective. American Journal of Evaluation. 1998;19(3):307–324.
  • Bourgeault I, Dingwall R, de Vries R. Handbook of Qualitative Methods in Health Research. Los Angeles: Sage Publications; 2010.
  • Caronna CA. Why use qualitative methods to study health care organizations? Insights from multi-level case studies. In: Bourgeault I, Dingwall R, de Vries R, editors. Handbook of Qualitative Methods in Health Research. Los Angeles: Sage Publications; 2010. pp. 71–87.
  • Charmaz K. Qualitative interviewing and grounded theory analysis. In: Gubrium JF, Hutchinson S, editors. Handbook of Interviewing. Thousand Oaks, CA: Sage Publications; 2001.
  • Charmaz K. Constructing Grounded Theory. Thousand Oaks, CA: Sage Publications; 2006.
  • Chase SE. Narrative inquiry: Multiple lenses, approaches, voices. In: Denzin NK, Lincoln YS, editors. Handbook of Qualitative Research. 3rd ed. Thousand Oaks, CA: Sage Publications; 2005. pp. 651–679.
  • Concept Systems. The Concept System, version 4.118. Ithaca, NY: Concept Systems Incorporated; 2006. http://www.conceptsystems.com.
  • Cook TD, Campbell DT. Quasi-Experimentation: Design & Analysis Issues for Field Settings. Boston: Houghton Mifflin; 1979.
  • Corsaro WA, Heise DR. Event structure models from ethnographic data. In: Clogg CC, editor. Sociological Methodology: 1990. Cambridge, MA: Basil Blackwell; 1990. pp. 1–57.
  • Couper MP. Designing Effective Web Surveys. Cambridge: Cambridge University Press; 2008.
  • Couper MP, Baker R, Bethlehem J, Clark C, Martin J, Nicholls WI, et al. Computer-Assisted Survey Information Collection. New York: John Wiley; 1998.
  • Creswell JW. Qualitative Inquiry and Research Design: Choosing Among Five Traditions. Thousand Oaks, CA: Sage Publications; 1998.
  • Creswell JW, Klassen AC, Plano Clark VL, Smith KC, for the Office of Behavioral and Social Sciences Research. Best practices for mixed methods research in the health sciences. 2011. http://obssr.od.nih.gov/mixed_methods_research.
  • Creswell JW, Plano Clark VL. Designing and Conducting Mixed Methods Research. Thousand Oaks, CA: Sage Publications; 2007.
  • Davies D, Dodd J. Qualitative research and the question of rigor. Qualitative Health Research. 2002;12(2):279–289.
  • de Leeuw ED. To mix or not to mix data collection modes in surveys. Journal of Official Statistics. 2005;21(2):233–255.
  • Denzin NK, Lincoln YS. Collecting and Interpreting Qualitative Materials. Thousand Oaks, CA: Sage Publications; 1998.
  • Denzin NK, Lincoln YS. Handbook of Qualitative Research. 3rd ed. Thousand Oaks, CA: Sage Publications; 2005.
  • Dicicco-Bloom B, Crabtree BF. The qualitative research interview. Medical Education. 2006;40(4):314–321. doi:10.1111/j.1365-2929.2006.02418.x.
  • Donner A. Some aspects of the design and analysis of cluster randomization trials. Journal of the Royal Statistical Society: Series C (Applied Statistics). 1998;47(1):95–113.
  • Donner A, Klar N. Cluster randomization trials in epidemiology: theory and application. Journal of Statistical Planning and Inference. 1994;42(1):37–56.
  • Draucker CB, Martsolf DS, Ross R, Rusk TB. Theoretical sampling and category development in grounded theory. Qualitative Health Research. 2007;17(8):1137–1148.
  • Finkelstein MO, Levin B, Robbins H. Clinical and prophylactic trials with assured new treatment for those at greater risk: I. A design proposal. American Journal of Public Health. 1996a;86(5):691–695.
  • Finkelstein MO, Levin B, Robbins H. Clinical and prophylactic trials with assured new treatment for those at greater risk: II. Examples. American Journal of Public Health. 1996b;86(5):696–705.
  • Fisher RA. Statistical Methods for Research Workers. Edinburgh: Oliver and Boyd; 1925.
  • Ford EW, Duncan WJ, Ginter PM. Health departments’ implementation of public health’s core functions: an assessment of health impacts. Public Health. 2005;119(1):11–21. doi:10.1016/j.puhe.2004.03.002.
  • Fowler FJ. Survey Research Methods. 4th ed. Applied Social Research Methods Series. Thousand Oaks, CA: Sage Publications; 2009.
  • Fowler FJ Jr, Gallagher PM, Stringfellow VL, Zaslavsky AM, Thompson JW, Cleary PD. Using telephone interviews to reduce nonresponse bias to mail surveys of health plan members. Medical Care. 2002;40(3):190–200.
  • Frankel MR, Shapiro MF, Duan N, Morton SC, Berry SH, Brown JA, et al. National probability samples in studies of low-prevalence diseases. Part II: Designing and implementing the HIV Cost and Services Utilization Study sample. Health Services Research. 1999;34(5 Pt 1):969–992.
  • Gabbay J, le May A. Evidence based guidelines or collectively constructed “mindlines?” Ethnographic study of knowledge management in primary care. BMJ. 2004;329(7473):1013. doi:10.1136/bmj.329.7473.1013.
  • Gibbons RD, Weiss DJ, Pilkonis PA, Frank E, Moore T, Kim JB, et al. The CAT-DI: a computerized adaptive test for depression. Archives of General Psychiatry. 2013; in press.
  • Gilchrist VJ, Williams RL. Key informant interviews. In: Crabtree BF, Miller WL, editors. Doing Qualitative Research. 2nd ed. Thousand Oaks, CA: Sage Publications; 1999. pp. 71–88.
  • Glaser BG, Strauss AL. The Discovery of Grounded Theory: Strategies for Qualitative Research. Chicago: Aldine Publishing Company; 1967.
  • Griffin LJ, Korstad RR. Historical inference and event-structure analysis. International Review of Social History. 1998;43(Supplement S6):145–165. doi:10.1017/S0020859000115135.
  • Heise D. Ethno. 2012. http://www.indiana.edu/~socpsy/ESA/.
  • Heise DR. Modeling event structures. The Journal of Mathematical Sociology. 1989;14(2–3):139–169. doi:10.1080/0022250X.1989.9990048.
  • Hotopf M. The pragmatic randomised controlled trial. Advances in Psychiatric Treatment. 2002;8(5):326–333.
  • Hsieh HF, Shannon SE. Three approaches to qualitative content analysis. Qualitative Health Research. 2005;15(9):1277–1288. doi:10.1177/1049732305276687.
  • Imbens GW, Lemieux T. Regression discontinuity designs: A guide to practice. Journal of Econometrics. 2008;142(2):615–635.
  • Jabbar AM, Abelson J. Development of a framework for effective community engagement in Ontario, Canada. Health Policy. 2011;101(1):59–69. doi:10.1016/j.healthpol.2010.08.024.
  • Kamberelis G, Dimitriadis G. Focus groups: Strategic articulations of pedagogy, politics, and inquiry. In: Denzin NK, Lincoln YS, editors. The Sage Handbook of Qualitative Research. 3rd ed. Thousand Oaks, CA: Sage Publications; 2005.
  • Kish L. Survey Sampling. Wiley Classics Library. New York: John Wiley & Sons; 1995.
  • Krefting L. Rigor in qualitative research: the assessment of trustworthiness. American Journal of Occupational Therapy. 1991;45(3):214–222.
  • Krippendorff K. Content Analysis: An Introduction to Its Methodology. Thousand Oaks, CA: Sage Publications; 2004.
  • Lee DS, Lemieux T. Regression discontinuity designs in economics. Journal of Economic Literature. 2010;48(2):281–355. doi:10.1257/jel.48.2.281.
  • Lensvelt-Mulders GJLM, Hox JJ, van der Heijden PGM, Maas CJM. Meta-analysis of randomized response research: Thirty-five years of validation. Sociological Methods & Research. 2005;33(3):319–348.
  • Luce BR, Kramer JM, Goodman SN, Connor JT, Tunis S, Whicher D, et al. Rethinking randomized clinical trials for comparative effectiveness research: the need for transformational change. Annals of Internal Medicine. 2009;151(3):206–209.
  • Marsden PV, Wright JD. Handbook of Survey Research. 2nd ed. Bingley, UK: Emerald Group Publishing; 2010.
  • Marshall MN. The key informant technique. Family Practice. 1996;13(1):92–97.
  • McNall M, Foster-Fishman PG. Methods of rapid evaluation, assessment, and appraisal. American Journal of Evaluation. 2007;28(2):151–168.
  • Miles MB, Huberman AM. Qualitative Data Analysis. 2nd ed. Thousand Oaks, CA: Sage Publications; 1994.
  • Miller WL, Crabtree BF. Depth interviewing. In: Doing Qualitative Research. Thousand Oaks, CA: Sage Publications; 1999. pp. 123–201.
  • Morgan DL. Successful Focus Groups: Advancing the State of the Art. Newbury Park, CA: Sage Publications; 1993.
  • Morgan DL, Krueger RA. The Focus Group Kit. Thousand Oaks, CA: Sage Publications; 1998.
  • Morse JM. Determining sample size. Qualitative Health Research. 2000;10(1):3–5.
  • Morse JM, Barrett M, Mayan M, Olson K, Spiers J. Verification strategies for establishing reliability and validity in qualitative research. International Journal of Qualitative Methods. 2002;1(2). http://www.ualberta.ca/~iiqm/backissues/1_2Final/html/morse.html.
  • Murray DM. Design and Analysis of Group-Randomized Trials. New York: Oxford University Press; 1998.
  • Murray SA, Tapson J, Turnbull L, McCallum J, Little A. Listening to local voices: adapting rapid appraisal to assess health and social needs in general practice. BMJ. 1994;308(6930):698–700.
  • Needle RH, Trotter RT, Singer M, Bates C, Page JB, Metzger D, et al. Rapid assessment of the HIV/AIDS crisis in racial and ethnic minority communities: an approach for timely community interventions. American Journal of Public Health. 2003;93(6):970–979.
  • Oakley A, Strange V, Bonell C, Allen E, Stephenson J. Process evaluation in randomised controlled trials of complex interventions. BMJ. 2006;332(7538):413–416. doi:10.1136/bmj.332.7538.413.
  • Palinkas LA, Aarons GA, Horwitz S, Chamberlain P, Hurlburt M, Landsverk J. Mixed method designs in implementation research. Administration and Policy in Mental Health. 2011a;38(1):44–53. doi:10.1007/s10488-010-0314-z.
  • Palinkas LA, Aarons GA, Horwitz S, Chamberlain P, Hurlburt M, Landsverk J. Mixed method designs in implementation research. Administration and Policy in Mental Health. 2011b;38(1):44–53.
  • Palinkas LA, Horwitz SM, Chamberlain P, Hurlburt MS, Landsverk J. Mixed-methods designs in mental health services research: a review. Psychiatric Services. 2011;62(3):255–263. doi:10.1176/appi.ps.62.3.255.
  • Palinkas LA, Horwitz SM, Green CA, Wisdom JP, Duan N, Hoagwood K. Purposeful sampling for qualitative data collection and analysis in mixed method implementation research. Administration and Policy in Mental Health. 2013.
  • Pentland BT. Building process theory with narrative: From description to explanation. Academy of Management Review. 1999;24(4):711–724.
  • Poland BD. Transcription quality as an aspect of rigor in qualitative research. Qualitative Inquiry. 1995;1(3):290–310.
  • Proctor EK, Landsverk J, Aarons G, Chambers D, Glisson C, Mittman B. Implementation research in mental health services: an emerging science with conceptual, methodological, and training challenges. Administration and Policy in Mental Health. 2009;36(1):24–34. doi:10.1007/s10488-008-0197-4.
  • Ragin CC. Turning the tables: How case-oriented research challenges variable-oriented research. Comparative Social Research. 1997;16:27–42.
  • Ragin CC. The distinctiveness of case-oriented research. Health Services Research. 1999a;34(5 Pt 2):1137–1151.
  • Ragin CC. Using qualitative comparative analysis to study causal complexity. Health Services Research. 1999b;34(5 Pt 2):1225–1239.
  • Ragin CC, Shulman D, Weinberg A, Gran B. Complexity, generality, and qualitative comparative analysis. Field Methods. 2003;15:323–340.
  • Rosenbaum PR. Observational Studies. 2nd ed. Springer Series in Statistics. New York: Springer; 2010.
  • Rosenbaum PR, Rubin DB. The central role of the propensity score in observational studies for causal effects. Biometrika. 1983;70(1):41–55.
  • Rosenbaum PR, Rubin DB. Reducing bias in observational studies using subclassification on the propensity score. Journal of the American Statistical Association. 1984;79(387):516–524.
  • Rossi PH, Wright JD, Anderson AB. Handbook of Survey Research: Quantitative Studies in Social Relations. 1st ed. Waltham, MA: Academic Press; 1983.
  • Scheuren F. Chapter 6, Designing a questionnaire. 2013. https://www.whatisasurvey.info/downloads/pamphlet_current.pdf.
  • Schwartz D, Lellouch J. Explanatory and pragmatic attitudes in therapeutical trials. Journal of Clinical Epidemiology. 2009;62(5):499–505. doi:10.1016/j.jclinepi.2009.01.012.
  • Shadish WR, Cook TD, Campbell DT. Experimental and Quasi-Experimental Designs for Generalized Causal Inference. Boston: Houghton Mifflin; 2002.
  • Shiffman S, Stone AA, Hufford MR. Ecological momentary assessment. Annual Review of Clinical Psychology. 2008;4:1–32.
  • Shortell SM. The emergence of qualitative methods in health services research. Health Services Research. 1999;34(5 Pt 2):1083–1090.
  • Stake RE. Qualitative case studies. In: Denzin NK, Lincoln YS, editors. Handbook of Qualitative Research. Thousand Oaks, CA: Sage Publications; 2005. pp. 443–466.
  • Stetler CB, Legro MW, Wallace CM, Bowman C, Guihan M, Hagedorn H, et al. The role of formative evaluation in implementation research and the QUERI experience. Journal of General Internal Medicine. 2006;21(Suppl 2):S1–S8.
  • Stevenson WB, Greenberg DN. The formal analysis of narratives of organizational change. Journal of Management. 1998;24(6):741–762.
  • Strauss AL, Corbin J. Theoretical sampling. In: Basics of Qualitative Research: Techniques and Procedures for Developing Grounded Theory. Thousand Oaks, CA: Sage Publications; 1998. pp. 201–215.
  • Thistlethwaite D, Campbell D. Regression-discontinuity analysis: an alternative to the ex post facto experiment. Journal of Educational Psychology. 1960;51:309–317.
  • Thorpe KE, Zwarenstein M, Oxman AD, Treweek S, Furberg CD, Altman DG, et al. A pragmatic-explanatory continuum indicator summary (PRECIS): a tool to help trial designers. CMAJ: Canadian Medical Association Journal. 2009;180(10):E47–E57.
  • Tong Y, Kolen MJ. Comparisons of methodologies and results in vertical scaling for educational achievement tests. Applied Measurement in Education. 2007;20(2):227–253. doi:10.1080/08957340701301207.
  • Torrey WC, Bond GR, McHugo GJ, Swain K. Evidence-based practice implementation in community mental health settings: the relative importance of key domains of implementation activity. Administration and Policy in Mental Health. 2012;39(5):353–364. doi:10.1007/s10488-011-0357-9.
  • Tourangeau R, Smith TW. Asking sensitive questions: The impact of data collection mode, question format, and question context. Public Opinion Quarterly. 1996;60(2):275–304.
  • Tremblay MA. The key informant technique: A nonethnographic application. American Anthropologist. 1957;59(4):688–701. doi:10.1525/aa.1957.59.4.02a00100.
  • Trochim WM. Introduction to concept mapping for planning and evaluation. Evaluation and Program Planning. 1989;12:1–16.
  • Trotter RT, Needle RH, Goosby E, Bates C, Singer M. A methodological model for rapid assessment, response, and evaluation: The RARE program in public health. Field Methods. 2001;13(2):137–159.
  • Trumpy AJ. Subject to negotiation: The mechanisms behind co-optation and corporate reform. Social Problems. 2008;55:519–536.
  • Tunis SR, Stryer DB, Clancy CM. Practical clinical trials: Increasing the value of clinical research for decision making in clinical and health policy. The Journal of the American Medical Association. 2003;290(12):1624–1632.
  • Weiss DJ. Adaptive testing by computer. Journal of Consulting and Clinical Psychology. 1985;53(6):774–789.
  • Weller SC, Romney AK. Systematic Data Collection. Newbury Park, CA: Sage; 1988.
  • West SG, Duan N, Pequegnat W, Gaist P, Des Jarlais DC, Holtgrave D, et al. Alternatives to the randomized controlled trial. American Journal of Public Health. 2008;98(8):1359–1366.
  • Westfall JM, Mold J, Fagnan L. Practice-based research--“Blue Highways” on the NIH roadmap. The Journal of the American Medical Association. 2007;297(4):403–406. doi:10.1001/jama.297.4.403.
  • Whittemore R, Chase SK, Mandle CL. Validity in qualitative research. Qualitative Health Research. 2001;11(4):522–537.
  • Yates F. Complex experiments, with discussion. Supplement to the Journal of the Royal Statistical Society. 1935;2(2):181–247.
  • Yin RK. Enhancing the quality of case studies in health services research. Health Services Research. 1999;34(5 Pt 2):1209–1224.
  • Yin RK. Applications of Case Study Research. Applied Social Research Methods Series. 2nd ed. Thousand Oaks, CA: Sage Publications; 2003a.
  • Yin RK. Case Study Research: Design and Methods. Applied Social Research Methods Series. 3rd ed. Thousand Oaks, CA: Sage Publications; 2003b.
