UCI Libraries

Systematic Reviews & Evidence Synthesis Methods


What are evidence syntheses?

According to the Royal Society, 'evidence synthesis' refers to the process of bringing together information from a range of sources and disciplines to inform debates and decisions on specific issues. They generally include a methodical and comprehensive literature synthesis focused on a well-formulated research question. Their aim is to identify and synthesize all of the scholarly research on a particular topic, including both published and unpublished studies. Evidence syntheses are conducted in an unbiased, reproducible way to provide evidence for practice and policy-making, as well as to identify gaps in the research. Evidence syntheses may also include a meta-analysis, a more quantitative process of synthesizing and visualizing data retrieved from various studies.

Evidence syntheses are much more time-intensive than traditional literature reviews and require a multi-person research team. See this PredicTER tool to get a sense of a systematic review timeline (one type of evidence synthesis). Before embarking on an evidence synthesis, it's important to clearly identify your reasons for conducting one. For a list of types of evidence synthesis projects, see the Types of Evidence Synthesis tab.

How does a traditional literature review differ from evidence synthesis?

One commonly used form of evidence synthesis is a systematic review. The comparison below contrasts a traditional literature review with a systematic review.

Review Question/Topic

  • Traditional literature review: Topics may be broad in scope; the goal of the review may be to place one's own research within the existing body of knowledge, or to gather information that supports a particular viewpoint.
  • Systematic review: Starts with a well-defined research question to be answered by the review. Reviews are conducted with the aim of finding all existing evidence in an unbiased, transparent, and reproducible way.

Searching for Studies

  • Traditional literature review: Searches may be ad hoc and based on what the author is already familiar with. Searches are not exhaustive or fully comprehensive.
  • Systematic review: Attempts are made to find all existing published and unpublished literature on the research question. The process is well-documented and reported.

Study Selection

  • Traditional literature review: Often lacks clear reasons for why studies were included or excluded from the review.
  • Systematic review: Reasons for including or excluding studies are explicit and informed by the research question.

Assessing the Quality of Included Studies

  • Traditional literature review: Often does not consider study quality or potential biases in study design.
  • Systematic review: Systematically assesses the risk of bias of individual studies and the overall quality of the evidence, including sources of heterogeneity between study results.

Synthesis of Existing Research

  • Traditional literature review: Conclusions are more qualitative and may not be based on study quality.
  • Systematic review: Bases conclusions on the quality of the studies and provides recommendations for practice or to address knowledge gaps.

Video: Reproducibility and transparent methods (3:25)

Reporting standards

There are reporting standards for evidence syntheses. These can serve as guidelines for protocol and manuscript preparation, and journals may require that these standards are followed for the review type being employed (e.g., systematic review, scoping review).

  • PRISMA checklist: Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) is an evidence-based minimum set of items for reporting in systematic reviews and meta-analyses.
  • PRISMA-P Standards: an updated version of the original PRISMA standards, for protocol development.
  • PRISMA-ScR: reporting guidelines for scoping reviews and evidence maps.
  • PRISMA-IPD Standards: an extension of the original PRISMA standards for systematic reviews and meta-analyses of individual participant data.
  • EQUATOR Network The EQUATOR (Enhancing the QUAlity and Transparency Of health Research) Network is an international initiative that seeks to improve the reliability and value of published health research literature by promoting transparent and accurate reporting and wider use of robust reporting guidelines. They provide a list of various standards for reporting in systematic reviews.

Video: Guidelines and reporting standards

PRISMA flow diagram

The PRISMA flow diagram depicts the flow of information through the different phases of an evidence synthesis. It maps the search (number of records identified), screening (number of records included and excluded), and selection (reasons for exclusion). Many evidence syntheses include a PRISMA flow diagram in the published manuscript.

See below for resources to help you generate your own PRISMA flow diagram.

  • PRISMA Flow Diagram Tool
  • PRISMA Flow Diagram Word Template
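The numbers in a PRISMA flow diagram are simple bookkeeping over the search and screening stages. As a minimal sketch (all counts and database names below are hypothetical, for illustration only):

```python
def prisma_flow(identified, duplicates, screened_out, fulltext_exclusions):
    """Compute the counts reported at each stage of a PRISMA flow diagram."""
    total = sum(identified.values())                  # records identified
    after_dedup = total - duplicates                  # records screened
    eligible = after_dedup - screened_out             # full texts assessed
    excluded_fulltext = sum(fulltext_exclusions.values())
    included = eligible - excluded_fulltext           # studies included
    return {
        "records identified": total,
        "records screened (after duplicates removed)": after_dedup,
        "full-text articles assessed for eligibility": eligible,
        "full-text articles excluded": excluded_fulltext,
        "studies included": included,
    }

# Hypothetical counts from a fictional review
flow = prisma_flow(
    identified={"MEDLINE": 412, "Embase": 389, "Web of Science": 247},
    duplicates=301,
    screened_out=612,
    fulltext_exclusions={
        "wrong population": 48,
        "wrong intervention": 33,
        "no usable outcome data": 21,
    },
)
for stage, n in flow.items():
    print(f"{stage}: {n}")
```

Recording exclusion reasons as labeled counts, as above, makes it straightforward to fill in the "reasons for exclusion" box of the diagram.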
  • Last Updated: Sep 4, 2024 10:30 AM
  • URL: https://guides.lib.uci.edu/evidence-synthesis


How to Do a Systematic Review: A Best Practice Guide for Conducting and Reporting Narrative Reviews, Meta-Analyses, and Meta-Syntheses

Affiliations.

  • 1 Behavioural Science Centre, Stirling Management School, University of Stirling, Stirling FK9 4LA, United Kingdom; email: [email protected].
  • 2 Department of Psychological and Behavioural Science, London School of Economics and Political Science, London WC2A 2AE, United Kingdom.
  • 3 Department of Statistics, Northwestern University, Evanston, Illinois 60208, USA; email: [email protected].
  • PMID: 30089228
  • DOI: 10.1146/annurev-psych-010418-102803

Systematic reviews are characterized by a methodical and replicable methodology and presentation. They involve a comprehensive search to locate all relevant published and unpublished work on a subject; a systematic integration of search results; and a critique of the extent, nature, and quality of evidence in relation to a particular research question. The best reviews synthesize studies to draw broad theoretical conclusions about what a literature means, linking theory to evidence and evidence to theory. This guide describes how to plan, conduct, organize, and present a systematic review of quantitative (meta-analysis) or qualitative (narrative review, meta-synthesis) information. We outline core standards and principles and describe commonly encountered problems. Although this guide targets psychological scientists, its high level of abstraction makes it potentially relevant to any subject area or discipline. We argue that systematic reviews are a key methodology for clarifying whether and how research findings replicate and for explaining possible inconsistencies, and we call for researchers to conduct systematic reviews to help elucidate whether there is a replication crisis.

Keywords: evidence; guide; meta-analysis; meta-synthesis; narrative; systematic review; theory.


Annual Review of Psychology

Volume 70, 2019. Review Article: How to Do a Systematic Review: A Best Practice Guide for Conducting and Reporting Narrative Reviews, Meta-Analyses, and Meta-Syntheses.

  • Andy P. Siddaway (1), Alex M. Wood (2), and Larry V. Hedges (3)
  • Vol. 70:747-770 (Volume publication date January 2019) https://doi.org/10.1146/annurev-psych-010418-102803
  • First published as a Review in Advance on August 08, 2018
  • Copyright © 2019 by Annual Reviews. All rights reserved


Literature Cited

  • APA Publ. Commun. Board Work. Group J. Artic. Rep. Stand. 2008 . Reporting standards for research in psychology: Why do we need them? What might they be?. Am. Psychol . 63 : 848– 49 [Google Scholar]
  • Baumeister RF 2013 . Writing a literature review. The Portable Mentor: Expert Guide to a Successful Career in Psychology MJ Prinstein, MD Patterson 119– 32 New York: Springer, 2nd ed.. [Google Scholar]
  • Baumeister RF , Leary MR 1995 . The need to belong: desire for interpersonal attachments as a fundamental human motivation. Psychol. Bull. 117 : 497– 529 [Google Scholar]
  • Baumeister RF , Leary MR 1997 . Writing narrative literature reviews. Rev. Gen. Psychol. 3 : 311– 20 Presents a thorough and thoughtful guide to conducting narrative reviews. [Google Scholar]
  • Bem DJ 1995 . Writing a review article for Psychological Bulletin. Psychol . Bull 118 : 172– 77 [Google Scholar]
  • Borenstein M , Hedges LV , Higgins JPT , Rothstein HR 2009 . Introduction to Meta-Analysis New York: Wiley Presents a comprehensive introduction to meta-analysis. [Google Scholar]
  • Borenstein M , Higgins JPT , Hedges LV , Rothstein HR 2017 . Basics of meta-analysis: I 2 is not an absolute measure of heterogeneity. Res. Synth. Methods 8 : 5– 18 [Google Scholar]
  • Braver SL , Thoemmes FJ , Rosenthal R 2014 . Continuously cumulating meta-analysis and replicability. Perspect. Psychol. Sci. 9 : 333– 42 [Google Scholar]
  • Bushman BJ 1994 . Vote-counting procedures. The Handbook of Research Synthesis H Cooper, LV Hedges 193– 214 New York: Russell Sage Found. [Google Scholar]
  • Cesario J 2014 . Priming, replication, and the hardest science. Perspect. Psychol. Sci. 9 : 40– 48 [Google Scholar]
  • Chalmers I 2007 . The lethal consequences of failing to make use of all relevant evidence about the effects of medical treatments: the importance of systematic reviews. Treating Individuals: From Randomised Trials to Personalised Medicine PM Rothwell 37– 58 London: Lancet [Google Scholar]
  • Cochrane Collab. 2003 . Glossary Rep., Cochrane Collab. London: http://community.cochrane.org/glossary Presents a comprehensive glossary of terms relevant to systematic reviews. [Google Scholar]
  • Cohn LD , Becker BJ 2003 . How meta-analysis increases statistical power. Psychol. Methods 8 : 243– 53 [Google Scholar]
  • Cooper HM 2003 . Editorial. Psychol. Bull. 129 : 3– 9 [Google Scholar]
  • Cooper HM 2016 . Research Synthesis and Meta-Analysis: A Step-by-Step Approach Thousand Oaks, CA: Sage, 5th ed.. Presents a comprehensive introduction to research synthesis and meta-analysis. [Google Scholar]
  • Cooper HM , Hedges LV , Valentine JC 2009 . The Handbook of Research Synthesis and Meta-Analysis New York: Russell Sage Found, 2nd ed.. [Google Scholar]
  • Cumming G 2014 . The new statistics: why and how. Psychol. Sci. 25 : 7– 29 Discusses the limitations of null hypothesis significance testing and viable alternative approaches. [Google Scholar]
  • Earp BD , Trafimow D 2015 . Replication, falsification, and the crisis of confidence in social psychology. Front. Psychol. 6 : 621 [Google Scholar]
  • Etz A , Vandekerckhove J 2016 . A Bayesian perspective on the reproducibility project: psychology. PLOS ONE 11 : e0149794 [Google Scholar]
  • Ferguson CJ , Brannick MT 2012 . Publication bias in psychological science: prevalence, methods for identifying and controlling, and implications for the use of meta-analyses. Psychol. Methods 17 : 120– 28 [Google Scholar]
  • Fleiss JL , Berlin JA 2009 . Effect sizes for dichotomous data. The Handbook of Research Synthesis and Meta-Analysis H Cooper, LV Hedges, JC Valentine 237– 53 New York: Russell Sage Found, 2nd ed.. [Google Scholar]
  • Garside R 2014 . Should we appraise the quality of qualitative research reports for systematic reviews, and if so, how. Innovation 27 : 67– 79 [Google Scholar]
  • Hedges LV , Olkin I 1980 . Vote count methods in research synthesis. Psychol. Bull. 88 : 359– 69 [Google Scholar]
  • Hedges LV , Pigott TD 2001 . The power of statistical tests in meta-analysis. Psychol. Methods 6 : 203– 17 [Google Scholar]
  • Higgins JPT , Green S 2011 . Cochrane Handbook for Systematic Reviews of Interventions, Version 5.1.0 London: Cochrane Collab. Presents comprehensive and regularly updated guidelines on systematic reviews. [Google Scholar]
  • John LK , Loewenstein G , Prelec D 2012 . Measuring the prevalence of questionable research practices with incentives for truth telling. Psychol. Sci. 23 : 524– 32 [Google Scholar]
  • Juni P , Witschi A , Bloch R , Egger M 1999 . The hazards of scoring the quality of clinical trials for meta-analysis. JAMA 282 : 1054– 60 [Google Scholar]
  • Klein O , Doyen S , Leys C , Magalhães de Saldanha da Gama PA , Miller S et al. 2012 . Low hopes, high expectations: expectancy effects and the replicability of behavioral experiments. Perspect. Psychol. Sci. 7 : 6 572– 84 [Google Scholar]
  • Lau J , Antman EM , Jimenez-Silva J , Kupelnick B , Mosteller F , Chalmers TC 1992 . Cumulative meta-analysis of therapeutic trials for myocardial infarction. N. Engl. J. Med. 327 : 248– 54 [Google Scholar]
  • Light RJ , Smith PV 1971 . Accumulating evidence: procedures for resolving contradictions among different research studies. Harvard Educ. Rev. 41 : 429– 71 [Google Scholar]
  • Lipsey MW , Wilson D 2001 . Practical Meta-Analysis London: Sage Comprehensive and clear explanation of meta-analysis. [Google Scholar]
  • Matt GE , Cook TD 1994 . Threats to the validity of research synthesis. The Handbook of Research Synthesis H Cooper, LV Hedges 503– 20 New York: Russell Sage Found. [Google Scholar]
  • Maxwell SE , Lau MY , Howard GS 2015 . Is psychology suffering from a replication crisis? What does “failure to replicate” really mean?. Am. Psychol. 70 : 487– 98 [Google Scholar]
  • Moher D , Hopewell S , Schulz KF , Montori V , Gøtzsche PC et al. 2010 . CONSORT explanation and elaboration: updated guidelines for reporting parallel group randomised trials. BMJ 340 : c869 [Google Scholar]
  • Moher D , Liberati A , Tetzlaff J , Altman DG PRISMA Group. 2009 . Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. BMJ 339 : 332– 36 Comprehensive reporting guidelines for systematic reviews. [Google Scholar]
  • Morrison A , Polisena J , Husereau D , Moulton K , Clark M et al. 2012 . The effect of English-language restriction on systematic review-based meta-analyses: a systematic review of empirical studies. Int. J. Technol. Assess. Health Care 28 : 138– 44 [Google Scholar]
  • Nelson LD , Simmons J , Simonsohn U 2018 . Psychology's renaissance. Annu. Rev. Psychol. 69 : 511– 34 [Google Scholar]
  • Noblit GW , Hare RD 1988 . Meta-Ethnography: Synthesizing Qualitative Studies Newbury Park, CA: Sage [Google Scholar]
  • Olivo SA , Macedo LG , Gadotti IC , Fuentes J , Stanton T , Magee DJ 2008 . Scales to assess the quality of randomized controlled trials: a systematic review. Phys. Ther. 88 : 156– 75 [Google Scholar]
  • Open Sci. Collab. 2015 . Estimating the reproducibility of psychological science. Science 349 : 943 [Google Scholar]
  • Paterson BL , Thorne SE , Canam C , Jillings C 2001 . Meta-Study of Qualitative Health Research: A Practical Guide to Meta-Analysis and Meta-Synthesis Thousand Oaks, CA: Sage [Google Scholar]
  • Patil P , Peng RD , Leek JT 2016 . What should researchers expect when they replicate studies? A statistical view of replicability in psychological science. Perspect. Psychol. Sci. 11 : 539– 44 [Google Scholar]
  • Rosenthal R 1979 . The “file drawer problem” and tolerance for null results. Psychol. Bull. 86 : 638– 41 [Google Scholar]
  • Rosnow RL , Rosenthal R 1989 . Statistical procedures and the justification of knowledge in psychological science. Am. Psychol. 44 : 1276– 84 [Google Scholar]
  • Sanderson S , Tatt ID , Higgins JP 2007 . Tools for assessing quality and susceptibility to bias in observational studies in epidemiology: a systematic review and annotated bibliography. Int. J. Epidemiol. 36 : 666– 76 [Google Scholar]
  • Schreiber R , Crooks D , Stern PN 1997 . Qualitative meta-analysis. Completing a Qualitative Project: Details and Dialogue JM Morse 311– 26 Thousand Oaks, CA: Sage [Google Scholar]
  • Shrout PE , Rodgers JL 2018 . Psychology, science, and knowledge construction: broadening perspectives from the replication crisis. Annu. Rev. Psychol. 69 : 487– 510 [Google Scholar]
  • Stroebe W , Strack F 2014 . The alleged crisis and the illusion of exact replication. Perspect. Psychol. Sci. 9 : 59– 71 [Google Scholar]
  • Stroup DF , Berlin JA , Morton SC , Olkin I , Williamson GD et al. 2000 . Meta-analysis of observational studies in epidemiology (MOOSE): a proposal for reporting. JAMA 283 : 2008– 12 [Google Scholar]
  • Thorne S , Jensen L , Kearney MH , Noblit G , Sandelowski M 2004 . Qualitative meta-synthesis: reflections on methodological orientation and ideological agenda. Qual. Health Res. 14 : 1342– 65 [Google Scholar]
  • Tong A , Flemming K , McInnes E , Oliver S , Craig J 2012 . Enhancing transparency in reporting the synthesis of qualitative research: ENTREQ. BMC Med. Res. Methodol. 12 : 181– 88 [Google Scholar]
  • Trickey D , Siddaway AP , Meiser-Stedman R , Serpell L , Field AP 2012 . A meta-analysis of risk factors for post-traumatic stress disorder in children and adolescents. Clin. Psychol. Rev. 32 : 122– 38 [Google Scholar]
  • Valentine JC , Biglan A , Boruch RF , Castro FG , Collins LM et al. 2011 . Replication in prevention science. Prev. Sci. 12 : 103– 17 [Google Scholar]
  • Article Type: Review Article

Most Read This Month

Most cited most cited rss feed, job burnout, executive functions, social cognitive theory: an agentic perspective, on happiness and human potentials: a review of research on hedonic and eudaimonic well-being, sources of method bias in social science research and recommendations on how to control it, mediation analysis, missing data analysis: making it work in the real world, grounded cognition, personality structure: emergence of the five-factor model, motivational beliefs, values, and goals.

University of Texas Libraries

Systematic Reviews & Evidence Synthesis Methods


Once you have completed your analysis, you will want to both summarize and synthesize those results. You may have a qualitative synthesis, a quantitative synthesis, or both.

Qualitative Synthesis

In a qualitative synthesis, you describe for readers how the pieces of your work fit together. You will summarize, compare, and contrast the characteristics and findings, exploring the relationships between them. Further, you will discuss the relevance and applicability of the evidence to your research question. You will also analyze the strengths and weaknesses of the body of evidence. Focus on where the gaps are in the evidence and provide recommendations for further research.

Quantitative Synthesis

Whether or not your systematic review includes a full meta-analysis, there is typically some element of data analysis. The quantitative synthesis combines and analyzes the evidence using statistical techniques, which includes comparing methodological similarities and differences and, potentially, the quality of the studies conducted.
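To make "combining the evidence using statistical techniques" concrete, here is a minimal sketch of one common approach, fixed-effect (inverse-variance) pooling of study effect sizes. The effect sizes and variances are hypothetical, and this is one technique among several, not a prescription from the guide above:

```python
import math

def pooled_fixed_effect(effects, variances):
    """Fixed-effect (inverse-variance) pooled estimate with a 95% CI."""
    weights = [1.0 / v for v in variances]    # more precise studies get more weight
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))        # standard error of the pooled estimate
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

# Hypothetical standardized effect sizes and their variances from three studies
effects = [0.30, 0.45, 0.12]
variances = [0.04, 0.09, 0.02]
estimate, ci = pooled_fixed_effect(effects, variances)
print(f"pooled effect = {estimate:.3f}, 95% CI = ({ci[0]:.3f}, {ci[1]:.3f})")
```

A fixed-effect model assumes the studies estimate one common effect; when study results are heterogeneous, a random-effects model is usually preferred.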

Summarizing vs. Synthesizing

In a systematic review, researchers do more than summarize findings from identified articles. You will synthesize the information you want to include.

While a summary concisely relates the important themes and elements of a larger work or works in condensed form, a synthesis takes information from a variety of works and combines it to create something new.

Synthesis:

"The goal of a systematic synthesis of qualitative research is to integrate or compare the results across studies in order to increase understanding of a particular phenomenon, not to add studies together. Typically the aim is to identify broader themes or new theories – qualitative syntheses usually result in a narrative summary of cross-cutting or emerging themes or constructs, and/or conceptual models."

Denner, J., Marsh, E. & Campe, S. (2017). Approaches to reviewing research in education. In D. Wyse, N. Selwyn, & E. Smith (Eds.), The BERA/SAGE Handbook of educational research (Vol. 2, pp. 143-164). doi: 10.4135/9781473983953.n7

  • Approaches to Reviewing Research in Education from Sage Knowledge

Data synthesis (Collaboration for Environmental Evidence Guidebook)

Interpreting findings and reporting conduct (Collaboration for Environmental Evidence Guidebook)

Interpreting results and drawing conclusions (Cochrane Handbook, Chapter 15)

Guidance on the conduct of narrative synthesis in systematic reviews (ESRC Methods Programme)

  • Last Updated: Aug 12, 2024 8:26 AM
  • URL: https://guides.lib.utexas.edu/systematicreviews


Systematic Review | Definition, Example & Guide

Published on June 15, 2022 by Shaun Turney . Revised on November 20, 2023.

A systematic review is a type of review that uses repeatable methods to find, select, and synthesize all available evidence. It answers a clearly formulated research question and explicitly states the methods used to arrive at the answer.

For example, one systematic review answered the question "What is the effectiveness of probiotics in reducing eczema symptoms and improving quality of life in patients with eczema?"

In this context, a probiotic is a health product that contains live microorganisms and is taken by mouth. Eczema is a common skin condition that causes red, itchy skin.

Table of contents

  • What is a systematic review?
  • Systematic review vs. meta-analysis
  • Systematic review vs. literature review
  • Systematic review vs. scoping review
  • When to conduct a systematic review
  • Pros and cons of systematic reviews
  • Step-by-step example of a systematic review
  • Other interesting articles
  • Frequently asked questions about systematic reviews

A review is an overview of the research that’s already been completed on a topic.

What makes a systematic review different from other types of reviews is that the research methods are designed to reduce bias. The methods are repeatable, and the approach is formal and systematic:

  • Formulate a research question
  • Develop a protocol
  • Search for all relevant studies
  • Apply the selection criteria
  • Extract the data
  • Synthesize the data
  • Write and publish a report

Although multiple sets of guidelines exist, the Cochrane Handbook for Systematic Reviews of Interventions is among the most widely used. It provides detailed guidelines on how to complete each step of the systematic review process.

Systematic reviews are most commonly used in medical and public health research, but they can also be found in other disciplines.

Systematic reviews typically answer their research question by synthesizing all available evidence and evaluating the quality of the evidence. Synthesizing means bringing together different information to tell a single, cohesive story. The synthesis can be narrative (qualitative), quantitative, or both.


Systematic reviews often quantitatively synthesize the evidence using a meta-analysis. A meta-analysis is not a type of review but a statistical technique that combines the results of two or more studies, usually to estimate an effect size.
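One widely used effect size is the standardized mean difference (Cohen's d), which meta-analyses often compute per study before combining. A minimal sketch from hypothetical two-group summary statistics:

```python
import math

def cohens_d(mean1, sd1, n1, mean2, sd2, n2):
    """Standardized mean difference (Cohen's d) using the pooled standard deviation."""
    pooled_var = ((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2)
    return (mean1 - mean2) / math.sqrt(pooled_var)

# Hypothetical summary statistics from a single two-arm trial
d = cohens_d(mean1=12.4, sd1=3.1, n1=40, mean2=10.9, sd2=2.8, n2=38)
print(f"d = {d:.2f}")
```

Expressing every study on the same standardized scale is what lets a meta-analysis combine results measured with different instruments.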

A literature review is a type of review that uses a less systematic and formal approach than a systematic review. Typically, an expert in a topic will qualitatively summarize and evaluate previous work, without using a formal, explicit method.

Although literature reviews are often less time-consuming and can be insightful or helpful, they have a higher risk of bias and are less transparent than systematic reviews.

Similar to a systematic review, a scoping review is a type of review that tries to minimize bias by using transparent and repeatable methods.

However, a scoping review isn’t a type of systematic review. The most important difference is the goal: rather than answering a specific question, a scoping review explores a topic. The researcher tries to identify the main concepts, theories, and evidence, as well as gaps in the current research.

Sometimes scoping reviews are an exploratory preparation step for a systematic review, and sometimes they are a standalone project.


A systematic review is a good choice of review if you want to answer a question about the effectiveness of an intervention, such as a medical treatment.

To conduct a systematic review, you’ll need the following:

  • A precise question, usually about the effectiveness of an intervention. The question needs to be about a topic that's previously been studied by multiple researchers. If there's no previous research, there's nothing to review.
  • If you're doing a systematic review on your own (e.g., for a research paper or thesis), you should take appropriate measures to ensure the validity and reliability of your research.
  • Access to databases and journal archives. Often, your educational institution provides you with access.
  • Time. A professional systematic review is a time-consuming process: it will take the lead author about six months of full-time work. If you're a student, you should narrow the scope of your systematic review and stick to a tight schedule.
  • Bibliographic, word-processing, spreadsheet, and statistical software. For example, you could use EndNote, Microsoft Word, Excel, and SPSS.

A systematic review has many pros.

  • They minimize research bias by considering all available evidence and evaluating each study for bias.
  • Their methods are transparent, so they can be scrutinized by others.
  • They’re thorough: they summarize all available evidence.
  • They can be replicated and updated by others.

Systematic reviews also have a few cons.

  • They’re time-consuming.
  • They’re narrow in scope: they only answer the precise research question.

The 7 steps for conducting a systematic review are explained with an example.

Step 1: Formulate a research question

Formulating the research question is probably the most important step of a systematic review. A clear research question will:

  • Allow you to more effectively communicate your research to other researchers and practitioners
  • Guide your decisions as you plan and conduct your systematic review

A good research question for a systematic review has four components, which you can remember with the acronym PICO:

  • Population(s) or problem(s)
  • Intervention(s)
  • Comparison(s)
  • Outcome(s)

You can rearrange these four components to write your research question:

  • What is the effectiveness of I versus C for O in P?

Sometimes, you may want to include a fifth component, the type of study design. In this case, the acronym is PICOT:

  • Type of study design(s)

For example, Boyle and colleagues reviewed the effectiveness of probiotics for treating eczema. Their PICOT components were:

  • The population of patients with eczema
  • The intervention of probiotics
  • In comparison to no treatment, placebo, or non-probiotic treatment
  • The outcome of changes in participant-, parent-, and doctor-rated symptoms of eczema and quality of life
  • Randomized controlled trials, a type of study design

Their research question was:

  • What is the effectiveness of probiotics versus no treatment, a placebo, or a non-probiotic treatment for reducing eczema symptoms and improving quality of life in patients with eczema?

Step 2: Develop a protocol

A protocol is a document that contains your research plan for the systematic review. This is an important step because having a plan allows you to work more efficiently and reduces bias.

Your protocol should include the following components:

  • Background information: Provide the context of the research question, including why it’s important.
  • Research objective(s): Rephrase your research question as an objective.
  • Selection criteria: State how you’ll decide which studies to include or exclude from your review.
  • Search strategy: Discuss your plan for finding studies.
  • Analysis: Explain what information you’ll collect from the studies and how you’ll synthesize the data.

If you’re a professional seeking to publish your review, it’s a good idea to bring together an advisory committee. This is a group of about six people who have experience in the topic you’re researching. They can help you make decisions about your protocol.

It’s highly recommended to register your protocol. Registering your protocol means submitting it to a database such as PROSPERO or ClinicalTrials.gov.

Step 3: Search for all relevant studies

Searching for relevant studies is the most time-consuming step of a systematic review.

To reduce bias, it’s important to search for relevant studies very thoroughly. Your strategy will depend on your field and your research question, but sources generally fall into these four categories:

  • Databases: Search multiple databases of peer-reviewed literature, such as PubMed or Scopus. Think carefully about how to phrase your search terms and include multiple synonyms of each word. Use Boolean operators if relevant.
  • Handsearching: In addition to searching the primary sources using databases, you’ll also need to search manually. One strategy is to scan relevant journals or conference proceedings. Another strategy is to scan the reference lists of relevant studies.
  • Gray literature: Gray literature includes documents produced by governments, universities, and other institutions that aren’t published by traditional publishers. Graduate student theses are an important type of gray literature, which you can search using the Networked Digital Library of Theses and Dissertations (NDLTD). In medicine, clinical trial registries are another important type of gray literature.
  • Experts: Contact experts in the field to ask if they have unpublished studies that should be included in your review.
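The advice above about combining synonyms of each concept with Boolean operators can be sketched as a small helper. This is an illustrative sketch only, not a validated search strategy: the function name and the synonym lists are invented for the eczema example.

```python
def build_query(concept_groups):
    """Combine synonyms with OR within each concept, then AND across concepts."""
    groups = []
    for synonyms in concept_groups:
        # Quote multi-word phrases so databases treat them as a unit
        quoted = [f'"{s}"' if " " in s else s for s in synonyms]
        groups.append("(" + " OR ".join(quoted) + ")")
    return " AND ".join(groups)

# Hypothetical synonym lists (population/problem, then intervention):
query = build_query([
    ["eczema", "atopic dermatitis"],
    ["probiotic", "lactobacillus"],
])
print(query)  # (eczema OR "atopic dermatitis") AND (probiotic OR lactobacillus)
```

In practice each database has its own syntax for truncation, phrase searching, and field tags, so a query like this still needs to be translated per database.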

At this stage of your review, you won’t read the articles yet. Simply save any potentially relevant citations using bibliographic software, such as Scribbr’s APA or MLA Generator.

In the eczema example, Boyle and colleagues searched the following sources:

  • Databases: EMBASE, PsycINFO, AMED, LILACS, and ISI Web of Science
  • Handsearch: Conference proceedings and reference lists of articles
  • Gray literature: The Cochrane Library, the metaRegister of Controlled Trials, and the Ongoing Skin Trials Register
  • Experts: Authors of unpublished registered trials, pharmaceutical companies, and manufacturers of probiotics

Step 4: Apply the selection criteria

Applying the selection criteria is a three-person job. Two of you will independently read the studies and decide which to include in your review based on the selection criteria you established in your protocol. The third person’s job is to break any ties.

To increase inter-rater reliability, ensure that everyone thoroughly understands the selection criteria before you begin.
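Inter-rater reliability between two screeners is often quantified with Cohen's kappa, which corrects raw agreement for agreement expected by chance. A minimal sketch, with made-up include/exclude decisions for illustration:

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: agreement between two raters, corrected for chance."""
    n = len(rater_a)
    categories = set(rater_a) | set(rater_b)
    # Observed agreement: fraction of items where the raters match
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: product of each rater's marginal proportions per category
    expected = sum((rater_a.count(c) / n) * (rater_b.count(c) / n) for c in categories)
    return (observed - expected) / (1 - expected)

# Hypothetical screening decisions from two reviewers:
a = ["include", "exclude", "exclude", "include", "exclude"]
b = ["include", "exclude", "include", "include", "exclude"]
kappa = cohens_kappa(a, b)  # ≈ 0.62 here; values above ~0.6 are commonly read as substantial agreement
```

A low kappa before screening begins is a signal to revisit and clarify the selection criteria with the team.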

If you’re writing a systematic review as a student for an assignment, you might not have a team. In this case, you’ll have to apply the selection criteria on your own; you can mention this as a limitation in your paper’s discussion.

You should apply the selection criteria in two phases:

  • Based on the titles and abstracts: Decide whether each article potentially meets the selection criteria based on the information provided in the abstracts.
  • Based on the full texts: Download the articles that weren’t excluded during the first phase. If an article isn’t available online or through your library, you may need to contact the authors to ask for a copy. Read the articles and decide which articles meet the selection criteria.

It’s very important to keep a meticulous record of why you included or excluded each article. When the selection process is complete, you can summarize what you did using a PRISMA flow diagram.

In the eczema example, Boyle and colleagues retrieved the full texts of the remaining studies. Boyle and Tang read through the articles to decide whether any more studies needed to be excluded based on the selection criteria.

When Boyle and Tang disagreed about whether a study should be excluded, they discussed it with Varigos until the three researchers came to an agreement.

Step 5: Extract the data

Extracting the data means collecting information from the selected studies in a systematic way. There are two types of information you need to collect from each study:

  • Information about the study’s methods and results. The exact information will depend on your research question, but it might include the year, study design, sample size, context, research findings, and conclusions. If any data are missing, you’ll need to contact the study’s authors.
  • Your judgment of the quality of the evidence, including risk of bias.

You should collect this information using forms. You can find sample forms in The Registry of Methods and Tools for Evidence-Informed Decision Making and from the Grading of Recommendations, Assessment, Development and Evaluations (GRADE) Working Group.

Extracting the data is also a three-person job. Two people should do this step independently, and the third person will resolve any disagreements.

Boyle and colleagues also collected data about possible sources of bias, such as how the study participants were randomized into the control and treatment groups.

Step 6: Synthesize the data

Synthesizing the data means bringing together the information you collected into a single, cohesive story. There are two main approaches to synthesizing the data:

  • Narrative (qualitative): Summarize the information in words. You’ll need to discuss the studies and assess their overall quality.
  • Quantitative: Use statistical methods to summarize and compare data from different studies. The most common quantitative approach is a meta-analysis, which allows you to combine results from multiple studies into a summary result.

Generally, you should use both approaches together whenever possible. If you don’t have enough data, or the data from different studies aren’t comparable, then you can take just a narrative approach. However, you should justify why a quantitative approach wasn’t possible.
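To make the quantitative approach concrete, here is a minimal sketch of the arithmetic behind a fixed-effect meta-analysis using inverse-variance weighting: each study's effect size is weighted by the inverse of its variance, so more precise studies count for more. The effect sizes and standard errors below are invented for illustration, not taken from the Boyle review.

```python
import math

def fixed_effect_meta(effects, standard_errors):
    """Inverse-variance (fixed-effect) pooling of study effect sizes.

    Returns the pooled effect and its 95% confidence interval.
    """
    weights = [1 / se ** 2 for se in standard_errors]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = math.sqrt(1 / sum(weights))
    return pooled, (pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se)

# Hypothetical log odds ratios and standard errors from three trials:
pooled, ci = fixed_effect_meta([-0.30, -0.10, -0.25], [0.15, 0.20, 0.10])
# pooled ≈ -0.24; the CI tells us whether the summary effect excludes 0
```

Real meta-analyses also assess heterogeneity between studies and often use a random-effects model instead; this sketch shows only the core weighting idea.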

Boyle and colleagues also divided the studies into subgroups, such as studies about babies, children, and adults, and analyzed the effect sizes within each group.

Step 7: Write and publish a report

The purpose of writing a systematic review article is to share the answer to your research question and explain how you arrived at this answer.

Your article should include the following sections:

  • Abstract: A summary of the review
  • Introduction: Including the rationale and objectives
  • Methods: Including the selection criteria, search method, data extraction method, and synthesis method
  • Results: Including results of the search and selection process, study characteristics, risk of bias in the studies, and synthesis results
  • Discussion: Including interpretation of the results and limitations of the review
  • Conclusion: The answer to your research question and implications for practice, policy, or research

To verify that your report includes everything it needs, you can use the PRISMA checklist.

Once your report is written, you can publish it in a systematic review database, such as the Cochrane Database of Systematic Reviews, and/or in a peer-reviewed journal.

In their report, Boyle and colleagues concluded that probiotics cannot be recommended for reducing eczema symptoms or improving quality of life in patients with eczema.

Note: Generative AI tools like ChatGPT can be useful at various stages of the writing and research process and can help you to write your systematic review. However, we strongly advise against trying to pass AI-generated text off as your own work.


A literature review is a survey of scholarly sources (such as books, journal articles, and theses) related to a specific topic or research question.

It is often written as part of a thesis, dissertation, or research paper, in order to situate your work in relation to existing knowledge.

A literature review is a survey of credible sources on a topic, often used in dissertations, theses, and research papers. Literature reviews give an overview of knowledge on a subject, helping you identify relevant theories and methods, as well as gaps in existing research. Literature reviews are set up similarly to other academic texts, with an introduction, a main body, and a conclusion.

An annotated bibliography is a list of source references that has a short description (called an annotation) for each of the sources. It is often assigned as part of the research process for a paper.

A systematic review is secondary research because it uses existing research. You don’t collect new data yourself.

Cite this Scribbr article


Turney, S. (2023, November 20). Systematic Review | Definition, Example & Guide. Scribbr. Retrieved September 3, 2024, from https://www.scribbr.com/methodology/systematic-review/


UCL LIBRARY SERVICES

Synthesis and systematic maps


Types of synthesis


Synthesis is the process of combining the findings of research studies. A synthesis is also the product and output of the combined studies. This output may be a written narrative, a table, or graphical plots, including statistical meta-analysis. The process of combining studies and the way the output is reported varies according to the research question of the review.

In primary research there are many research questions and many different methods to address them. The same is true of systematic reviews. Two common but different types of review are those asking about the evidence of impact (effectiveness) of an intervention and those asking about ways of understanding a social phenomenon.

If a systematic review question is about the effectiveness of an intervention, then the included studies are likely to be experimental studies that test whether an intervention is effective or not. These studies report evidence of the relative effect of an intervention compared to control conditions.

A synthesis of these types of studies aggregates the findings of the studies together. This produces an overall measure of effect of the intervention (after taking into account the sample sizes of the studies). This is a type of quantitative synthesis that is testing a hypothesis (that an intervention is effective) and the review methods are described in advance (using a deductive a priori paradigm).

  • Ongoing developments in meta-analytic and quantitative synthesis methods: Broadening the types of research questions that can be addressed O'Mara-Eves, A. and Thomas, J. (2016). This paper discusses different types of quantitative synthesis in education research.

If a systematic review question is about ways of understanding a social phenomenon, the review iteratively analyses the findings of studies to develop overarching concepts, theories or themes. The included studies are likely to provide theories, concepts or insights about a phenomenon. This might, for example, be studies trying to explain why patients do not always take the medicines provided to them by doctors.

A synthesis of these types of studies is an arrangement or configuration of the concepts from individual studies. It provides overall ‘meta’ concepts to help understand the phenomenon under study. This type of qualitative or conceptual synthesis is more exploratory, and some of the detailed methods may develop during the process of the review (using an inductive, iterative paradigm).

  • Methods for the synthesis of qualitative research: a critical review. Barnett-Page and Thomas (2009). This paper summarises some of the different approaches to qualitative synthesis.

There are also multi-component reviews that ask a broad question with sub-questions, using different review methods.

  • Teenage pregnancy and social disadvantage: systematic review integrating controlled trials and qualitative studies. Harden et al. (2009). An example of a review that combines two types of synthesis. It develops: 1) a statistical meta-analysis of controlled trials on interventions for early parenthood; and 2) a thematic synthesis of qualitative studies of young people’s views of early parenthood.

Systematic evidence maps

Systematic evidence maps are a product that describes the nature of research in an area. This is in contrast to a synthesis, which uses research findings to make a statement about an evidence base. A 'systematic map' can both explain what has been studied and indicate what has not been studied and where there are gaps in the research (gap maps). Maps can be useful for comparing trends and differences across sets of studies.

Systematic maps can be a standalone finished product of research, without a synthesis, or may be a component of a systematic review that will synthesise studies.

A systematic map can help to plan a synthesis. The map may show that the studies to be synthesised are very different from each other, making it more appropriate to synthesise only a subset of them. Where a subset of studies is used, the review question and the boundaries of the review will need to be narrowed in order to provide a rigorous approach for selecting that subset from the map. The studies in the map that are not synthesised can still help with interpreting the synthesis and drawing conclusions. Note that, confusingly, the term 'scoping review' is sometimes used to describe systematic evidence maps, and at other times to refer to quick, selective scopes of the nature and size of the literature in an area.

A systematic map may be published in different formats, such as a written report or database. Increasingly, maps are published as databases with interactive visualisations to enable the user to investigate and visualise different parts of the map. Living systematic maps are regularly updated so the evidence stays current.

Some examples of different maps are shown here:

  • Women in Wage Labour: An evidence map of what works to increase female wage labour market participation in LMICs. Example of a systematic evidence map from the Africa Centre for Evidence.
  • Acceptability and uptake of vaccines: Rapid map of systematic reviews Example of a map of systematic reviews.
  • COVID-19: a living systematic map of the evidence Example of a living map of health research on COVID-19.

Meta-analysis

  • What is a meta-analysis? Helpful resource from the University of Nottingham.
  • MetaLight: software for teaching and learning meta-analysis Software tool that can help in learning about meta-analysis.
  • KTDRR Research Evidence Training: An Overview of Effect Sizes and Meta-analysis Webcast video (56 mins). Overview of effect sizes and meta-analysis.
  • Last Updated: Aug 2, 2024 9:22 AM
  • URL: https://library-guides.ucl.ac.uk/systematic-reviews

SMU Libraries

Evidence Syntheses and Systematic Reviews: Overview


What is Evidence Synthesis?

Evidence Synthesis: general term used to refer to any method of identifying, selecting, and combining results from multiple studies. There are several types of reviews which fall under this term; the main ones are in the table below: 

Types of Reviews

| Type of Review | Description | Search | Formal Inclusion Criteria | Use of Protocols | Results |
| --- | --- | --- | --- | --- | --- |
| Systematic Review | Comprehensive literature synthesis on a specific research question; typically requires a team | Systematic, exhaustive, and comprehensive search of all available evidence | Yes | Yes | Narrative and tables; describes what is known and unknown, recommendations for future research, limitations of findings |
| Rapid Review | A quicker, simplified version of the systematic review, used to assess what is already known about a policy or practice issue | May not involve as many databases as a systematic review | Yes; determined by time constraints | Yes; determined by time constraints | Usually narrative summary and tables |
| Scoping Review or Systematic Map | Seeks to identify gaps, trends, themes, and opportunities for evidence synthesis on a broad topic | Broad search, exploratory in nature | Less rigorous | Yes, but not a strict one | May critically evaluate existing evidence; summarizes results qualitatively |
| Umbrella Review | Synthesizes evidence from multiple systematic reviews | Searches existing systematic reviews and meta-analyses | Yes | Yes | Integrates findings from existing reviews without statistical pooling; states what is known (recommendations for practice) and what is unknown (recommendations for future research) |
| Meta-analysis | A statistical technique for combining the findings from disparate quantitative studies; may stand alone or be part of a systematic review | Searches results from multiple studies on a specific research question; may include unpublished studies | Yes | Yes | Quantitative synthesis; numerical analysis of measures of effect |
| Narrative Literature Review | Standalone review (not to be confused with the literature review in an empirical study); may be broad or focused, with varying levels of comprehensiveness | May or may not be comprehensive | Varies | No | Narrative describing what is known and unknown, recommendations for future research, limitations of findings |

General Steps for Conducting Systematic Reviews

The number of steps for conducting Evidence Synthesis varies a little, depending on the source that one consults. However, the following steps are generally accepted in how Systematic Reviews are done:

  1. Identify a gap in the literature and form a well-developed, answerable research question, which will form the basis of your search.
  2. Select a framework that will help guide the type of study you’re undertaking.
  3. Select a reporting guideline and draft a protocol. Different guidelines are used for documenting and reporting the protocol of your systematic review before the review is conducted; create the protocol following whichever guideline you select. A protocol is a detailed plan for the project.
  4. Register the protocol with an appropriate registry after it is written.
  5. Select databases and grey literature sources. It is advisable to consult a librarian before embarking on this phase of the review process: they can recommend databases and other sources to use and even help design complex searches.
  6. Search databases and other sources. Not all databases use the same search syntax, so when searching multiple databases, adapt the search to the syntax of each individual database. Use a citation management tool to store and organize your citations during the review process; it is a great help when de-duplicating your citation results.
  7. Screen articles. The inclusion and exclusion criteria you have already developed help you remove articles that are not relevant to your topic.
  8. Assess the quality of your findings to eliminate bias in either the design of the study or in the results/conclusions (generally not done outside of systematic reviews).
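The de-duplication that citation managers perform after multi-database searching can be sketched roughly: match records on DOI, falling back to a normalized title when the DOI is missing. The record format and the DOIs below are invented for illustration and are far simpler than what real reference managers use.

```python
def title_key(title):
    """Lowercase and strip punctuation so near-identical titles match."""
    cleaned = "".join(ch for ch in title.lower() if ch.isalnum() or ch.isspace())
    return " ".join(cleaned.split())

def deduplicate(records):
    """Keep the first record for each DOI (or normalized title when DOI is missing)."""
    seen, unique = set(), []
    for rec in records:
        key = rec["doi"] or title_key(rec["title"])
        if key not in seen:
            seen.add(key)
            unique.append(rec)
    return unique

records = [
    {"title": "Probiotics for treating eczema.", "doi": "10.1000/xyz1"},  # hypothetical DOI
    {"title": "Probiotics for Treating Eczema",  "doi": "10.1000/xyz1"},  # duplicate from a second database
    {"title": "A different trial", "doi": None},
]
unique = deduplicate(records)  # 2 records remain
```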

Extract and Synthesize

  • Extract the data from the studies that remain after screening
  • Use extraction tools to get data from individual studies that will be analyzed or summarized
  • Synthesize the main findings of your research

Report Findings

Report the results using a statistical approach or in a narrative form.

Need More Help?

Librarians can:

  • Provide guidance on which methodology best suits your goals
  • Recommend databases and other information sources for searching
  • Design and implement comprehensive and reproducible database-specific search strategies 
  • Recommend software for article screening
  • Assist with the use of citation management
  • Offer best practices on documentation of searches

Related Guides

  • Literature Reviews
  • Choose a Citation Manager
  • Project Management


  • Last Updated: Jun 12, 2024 5:44 PM
  • URL: https://guides.smu.edu/evidencesyntheses

RMIT University


Systematic Reviews


What is synthesis?


Synthesis is a stage in the systematic review process where extracted data (findings of individual studies) are combined and evaluated. The synthesis part of a systematic review will determine the outcomes of the review.

There are two commonly accepted methods of synthesis in systematic reviews:

  • Quantitative data synthesis
  • Qualitative data synthesis

The way data are extracted from your studies, synthesised, and presented depends on the type of data being handled.

If you have quantitative information, some of the more common tools used to summarise data include:

  • grouping of similar data, i.e. presenting the results in tables
  • charts, e.g. pie-charts
  • graphical displays such as forest plots

If you have qualitative information, some of the more common tools used to summarise data include:

  • textual descriptions, i.e. written words
  • thematic or content analysis

Whatever tool/s you use, the general purpose of extracting and synthesising data is to show the outcomes and effects of various studies and identify issues with methodology and quality. This means that your synthesis might reveal a number of elements, including:

  • overall level of evidence
  • the degree of consistency in the findings
  • what the positive effects of a drug or treatment are, and what these effects are based on
  • how many studies found a relationship or association between two things

Quantitative synthesis (meta-analysis)

In a quantitative systematic review, data is presented statistically. Typically, this is referred to as a meta-analysis.

The usual method is to combine and evaluate data from multiple studies. This is normally done in order to draw conclusions about outcomes, effects, shortcomings of studies and/or applicability of findings.

Remember, the data you synthesise should relate to your research question and protocol (plan). In the case of quantitative analysis, the data extracted and synthesised will relate to whatever method was used to generate the research question (e.g. PICO method), and whatever quality appraisals were undertaken in the analysis stage.

One way of accurately representing all of your data is in the form of a forest plot. A forest plot is a way of combining results of multiple clinical trials in order to show point estimates arising from different studies of the same condition or treatment.

It comprises a graphical representation and often also a table. The graphical display shows the mean value for each trial, often with a confidence interval (the horizontal bars). Each mean is plotted relative to the vertical line of no difference.
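The values a forest plot displays can be computed directly from each study's mean and standard error: a 95% confidence interval per study, and whether that interval crosses the vertical line of no difference. A minimal sketch, with invented trial data:

```python
def forest_rows(studies, no_effect=0.0):
    """For each (name, effect, se) triple, compute the 95% CI a forest plot
    would draw, and whether that CI crosses the line of no difference."""
    rows = []
    for name, effect, se in studies:
        lo, hi = effect - 1.96 * se, effect + 1.96 * se
        rows.append((name, round(lo, 3), round(hi, 3), lo <= no_effect <= hi))
    return rows

# Hypothetical mean differences and standard errors from two trials:
rows = forest_rows([("Trial A", -0.40, 0.15), ("Trial B", 0.10, 0.20)])
# Trial A's CI excludes 0 (its horizontal bar sits left of the line);
# Trial B's CI crosses the line of no difference
```

A real forest plot would also show study weights and the pooled diamond; this sketch covers only the per-study bars.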

  • Forest Plots - Understanding a Meta-Analysis in 5 Minutes or Less (5:38 min) In this video, Dr. Maureen Dobbins, Scientific Director of the National Collaborating Centre for Methods and Tools, uses an example from social health to explain how to construct a forest plot graphic.
  • How to interpret a forest plot (5:32 min) In this video, Terry Shaneyfelt, Clinician-educator at UAB School of Medicine, talks about how to interpret information contained in a typical forest plot, including table data.
  • An introduction to meta-analysis (13 mins) Dr Christopher J. Carpenter introduces the concept of meta-analysis, a statistical approach to finding patterns and trends among research studies on the same topic. Meta-analysis allows the researcher to weight study results based on size, moderating variables, and other factors.

Journal articles

  • Neyeloff, J. L., Fuchs, S. C., & Moreira, L. B. (2012). Meta-analyses and Forest plots using a microsoft excel spreadsheet: step-by-step guide focusing on descriptive data analysis. BMC Research Notes, 5(1), 52-57. https://doi.org/10.1186/1756-0500-5-52 Provides a step-by-step guide on how to use Excel to perform a meta-analysis and generate forest plots.
  • Ried, K. (2006). Interpreting and understanding meta-analysis graphs: a practical guide. Australian Family Physician, 35(8), 635- 638. This article provides a practical guide to appraisal of meta-analysis graphs, and has been developed as part of the Primary Health Care Research Evaluation Development (PHCRED) capacity building program for training general practitioners and other primary health care professionals in research methodology.

Qualitative synthesis

In a qualitative systematic review, data can be presented in a number of different ways. A typical procedure in the health sciences is thematic analysis.

As explained by James Thomas and Angela Harden (2008) in an article for BMC Medical Research Methodology:

"Thematic synthesis has three stages:

  • the coding of text 'line-by-line'
  • the development of 'descriptive themes'
  • and the generation of 'analytical themes'

While the development of descriptive themes remains 'close' to the primary studies, the analytical themes represent a stage of interpretation whereby the reviewers 'go beyond' the primary studies and generate new interpretive constructs, explanations or hypotheses" (p. 45).

A good example of how to conduct a thematic analysis in a systematic review is the following journal article by Jørgensen et al. (2018) on cancer patients. In it, the authors go through the process of:

(a) identifying and coding information about the selected studies' methodologies and findings on patient care

(b) organising these codes into subheadings and descriptive categories

(c) developing these categories into analytical themes

Jørgensen, C. R., Thomsen, T. G., Ross, L., Dietz, S. M., Therkildsen, S., Groenvold, M., Rasmussen, C. L., & Johnsen, A. T. (2018). What facilitates “patient empowerment” in cancer patients during follow-up: A qualitative systematic review of the literature. Qualitative Health Research, 28(2), 292-304. https://doi.org/10.1177/1049732317721477

Thomas, J., & Harden, A. (2008). Methods for the thematic synthesis of qualitative research in systematic reviews. BMC Medical Research Methodology, 8(1), 45-54. https://doi.org/10.1186/1471-2288-8-45


Creative Commons license: CC-BY-NC.

  • Last Updated: Aug 30, 2024 4:17 PM
  • URL: https://rmit.libguides.com/systematicreviews

Libraries | Research Guides

Evidence Synthesis & Systematic Reviews


1. Develop a Research Question and Apply a Framework

2. Select a Reporting Guideline

3. Select Databases

4. Select Grey Literature Sources

5. Write a Search Strategy

6. Register a Protocol

7. Translate Search Strategies

8. Manage Your Citations

9. Article Screening

10. Assess the Risk of Bias

11. Extract the Data

12. Synthesize, Map, or Describe the Results

  • Quantitative Studies (PICO)
  • Qualitative Studies (PICo, CHIP)
  • Mixed Methods (SPICE, SPIDER)
  • Scoping Reviews (PCC)

Formulating a research question is key to a systematic review. It will be the foundation upon which the rest of the research is built. At this stage in the process, you will have identified a knowledge gap in your field, and you are aiming to answer a specific question. For example:

If X is prescribed, what happens to Y patients?

or assess an intervention:

How does X affect Y?

or synthesize existing evidence:

What is the nature of X?

Developing a research question takes time. You will likely go through different versions before settling on a final question. Once you've developed your research question, you will use it to create a search strategy.

Frameworks help to break your question into parts so you can clearly see the elements of your topic. Depending on your field of study, the frameworks listed in this guide may not fit the types of questions you're asking. There are dozens of frameworks you can use to formulate your specific and answerable research question. To see other frameworks you might use, visit the  University of Maryland's Systematic Review guide.

The most common framework for systematic reviews is PICO, which is often used within the health sciences for clinical research, or in education. It is commonly used for quantitative studies.

P: Population

I: Intervention/Exposure

C: Comparison

O: Outcome

T: Time (optional; with this element the framework is called PICOT)

Example:  In 11-12 year old children (Population), what is the effect of a school-based multi-media learning program (Intervention) on real-world problem-solving skills (Outcome) compared with an analog-only curriculum (Comparison) within a one-year period (Time)?

Source:  Richardson, W. S., Wilson, M. C., Nishikawa, J., & Hayward, R. S. (1995). The well-built clinical question: A key to evidence-based decisions. ACP Journal Club, 123(3), A12-A12.

P: Population/problem

I: Phenomenon of Interest

Co: Context

Example:  What are the  experiences  (phenomenon of interest) of  caregivers providing home based care to patients with Alzheimer's disease  (population) in  Australia  (context)?

Source:  Methley, A.M., Campbell, S., Chew-Graham, C.  et al.  PICO, PICOS and SPIDER: a comparison study of specificity and sensitivity in three search tools for qualitative systematic reviews.  BMC Health Serv Res   14,  579 (2014). https://doi.org/10.1186/s12913-014-0579-0

________________________________________________________________________

C: Context

H: How

I: Issues

P: Population

Example: 

Source:  Shaw, R. (2010).  Conducting literature reviews . In M. A. Forester (Ed.),  Doing Qualitative Research in Psychology: A Practical  Guide  (pp. 39-52). London, Sage.

S: Setting

P: Perspective

I: Intervention/Exposure/Interest

C: Comparison

E: Evaluation

Example:  What are the  benefits  (evaluation) of a  doula  (intervention) for  low income mothers  (perspective) in the  developed world  (setting) compared to  no support  (comparison)?

Source:  Booth, A. (2006). Clear and present questions: Formulating questions for evidence based practice.  Library Hi Tech, 24 (3), 355-368.   https://doi.org/10.1108/07378830610692127

________________________________________________________

S: Sample

PI: Phenomenon of Interest

D: Design

E: Evaluation

R: Research Type

Example:  What are the experiences (evaluation) of women (sample) undergoing IVF treatment (phenomenon of interest)?

Design:   questionnaire or survey or interview

Study Type:  qualitative or mixed method

Source:  Cooke, A., Smith, D., & Booth, A. (2012). Beyond PICO: The SPIDER tool for qualitative evidence synthesis.  Qualitative Health Research, 22 (10), 1435-1443.  https://doi.org/10.1177/1049732312452938

Scoping reviews generally have a broader scope than systematic reviews, but it is still helpful to put scoping and mapping reviews within a framework. The Joanna Briggs Institute offers guidance on forming scoping review questions in Chapter 10 of their manual for evidence synthesis . They recommend using the PCC framework:

P: Population

C: Concept

C: Context

Example:  What are the trends (concept) in MOOCs (context) that support the interactions of learners with disabilities (population)?

Peters MDJ, Godfrey C, McInerney P, Munn Z, Tricco AC, Khalil, H. Scoping Reviews (2020). Aromataris E, Lockwood C, Porritt K, Pilla B, Jordan Z, editors. JBI Manual for Evidence Synthesis. JBI; 2024. Available from:  https://synthesismanual.jbi.global .   https://doi.org/10.46658/JBIMES-24-09

  • MARS Meta-Analysis reporting standards From the American Psychological Association (APA).
  • MECCIR (Methodological Expectations of Campbell Collaboration Intervention Reviews) Links to site to download reporting standards for reviews in the social sciences and education.
  • PRISMA A 27-item checklist. PRISMA guidelines are used primarily by those within the health sciences.
  • PRISMA ScR The PRISMA Scoping Review checklist. Created for the health sciences, but can be used across disciplines.

Librarians can assist you with selecting databases for your systematic review. Each database is different and will require a different search syntax. Some databases have controlled vocabulary and thesauri that you will want to incorporate into your searches. We recommend creating one master search strategy and then translating it for each database. 

To begin browsing databases, visit the A-Z Database List:

  • A-Z Databases A-Z list of databases available through the Northwestern University Libraries.
  • Northwestern Research Guides Created by Northwestern Librarians, research guides are curated lists of databases and resources for each discipline.
  • What is Grey Literature?
  • Why Search Grey Literature?
  • How do I search Grey Literature?
  • Sources for Grey Literature

Grey (or gray) Literature is "A variety of written materials produced by organizations outside of traditional commercial and academic publishing channels, such as annual reports, [theses and dissertations], white papers, or conference proceedings from government agencies, non-governmental organizations, or private companies. Grey literature may be difficult to access because it may not be widely distributed or included in bibliographic databases." 

Your research question and field of study will guide what type of grey literature to include in your systematic review. 

Source: Byrne, D. (2017). Reviewing the literature.  Project Planner . 10.4135/9781526408518.

The purpose of a systematic review is to identify and synthesize all available evidence. There is significant bias in scientific publishing toward publishing studies that show some sort of significant effect. In fact, according to Campbell Collaboration Guidelines on Information Retrieval , more than 50% of studies reported in conference abstracts never reach full publication. While conference abstracts and other grey literature are not peer-reviewed, it is important to include all available research on the topic you're studying.

Finding grey literature on your topic may require some creativity, and may involve going directly to the source. Here are a few tips:

  • Find a systematic review on a topic similar to yours and see what grey literature sources they used. You can find existing systematic reviews in subject databases, The Campbell Library, and the Cochrane Library. In databases such as PsycINFO, you can use the Methodology search tool to narrow by Systematic Review or Meta-Analysis; otherwise check the thesaurus for controlled vocabulary or use the keyword search to add ("systematic review" OR meta-analysis OR "scoping review") to your search string.
  • Ask colleagues and other experts in the field for sources of grey literature in your discipline.
  • Contact known researchers in the field to learn if there are any unpublished or ongoing studies to be aware of.
  • On the web, search professional associations, research funders, and government websites.
  • ProQuest Dissertations & Theses Global This link opens in a new window With more than 2 million entries, PQD&T offers comprehensive listings for U.S. doctoral dissertations back to 1861, with extensive coverage of dissertations from many non-U.S. institutions. A number of masters theses are also listed. Thousands of dissertations are available full text, and abstracts are included for dissertations from the mid-1980s forward.
  • Networked Digital Library of Theses and Dissertations (NDLTD) An international organization dedicated to promoting the adoption, creation, use, dissemination, and preservation of electronic theses and dissertations (ETDs).
  • WHO Institutional Repository for Resource Sharing Institutional WHO database of intergovernmental policy documents and technical reports. Can search by IRIS by region (Africa, Americas, Eastern Mediterranean, Europe, South-East Asia, Western Pacific)
  • OCLC PapersFirst OCLC index of papers presented at conferences worldwide.
  • OSF Preprints Center for Open Science Framework's search tool for scholarly preprints in the fields of architecture, arts, business, social and behavioral science, and more.
  • Directory of Open Access Repositories Global Directory of Open Access Repositories. You can search and browse through thousands of registered repositories based on a range of features, such as location, software or type of material held.
  • Social Science Research Network This link opens in a new window A service providing scholarly research papers, working papers, and journals in numerous social science disciplines. Includes the following: Accounting Research Network, Cognitive Science Network, Economics Research Network, Entrepreneurship Research & Policy Network, Financial Economics Network, Legal Scholarship Network, Management Research Network.

Use the keywords from your research question and begin to create a core keyword search that can then be translated to fit each database search. Since the goal is to be as comprehensive as possible, you will want to identify all terms that may be used for each of the keywords, and use a combination of natural language and controlled vocabulary when available. Librarians are available to assist with search strategy development and keyword review.

Your core keyword search will likely include some or all of the following syntax:

  • Boolean operators (AND, OR, and NOT) 
  • Proximity operators (NEAR or WITHIN)
  • Synonyms, related terms, and alternate spellings
  • Controlled vocabulary (found within the database thesaurus)
  • Truncation (ex: preg* would find pregnant and pregnancy)

Search filters that are built into databases may also be used, but use them with caution. Database articles within the social sciences tend not to be as consistently or thoroughly indexed as those within the health sciences, so using filters could cause you to miss some relevant results.

Source:  Kugley S, Wade A, Thomas J, Mahood Q, Jørgensen AMK, Hammerstrøm K, Sathe N. Searching for studies: A guide to information retrieval for Campbell Systematic Reviews . Campbell Methods Guides 2016:1 DOI: 10.4073/cmg.2016.1
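The syntax elements above can be combined mechanically: synonyms for a single concept are joined with OR, and the resulting concept groups are joined with AND. Below is a minimal Python sketch of that assembly, using made-up terms (it is an illustration of the Boolean logic, not any database's actual API):

```python
# A minimal sketch (not a database API) of assembling a core keyword search:
# synonyms for the same concept are OR'd together, and concept groups are
# AND'd. The terms below are illustrative only.

def build_search_string(concepts):
    """Combine lists of synonyms into a Boolean search string.

    Each inner list is one concept: its terms are OR'd together,
    and the resulting parenthesized groups are AND'd.
    """
    groups = []
    for terms in concepts:
        quoted = [f'"{t}"' if " " in t else t for t in terms]  # quote phrases
        groups.append("(" + " OR ".join(quoted) + ")")
    return " AND ".join(groups)

# Example using the truncation term preg* mentioned above
query = build_search_string([
    ["preg*", "prenatal", "antenatal"],
    ["exercise", "physical activity"],
])
print(query)
# (preg* OR prenatal OR antenatal) AND (exercise OR "physical activity")
```

The output string would still need to be translated into each database's own syntax (field tags, proximity operators, controlled vocabulary) before use.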

  • Recording Synonyms Worksheet Template you can use when creating lists of search terms.
  • SnowGlobe A program that assists with literature searching, SnowGlobe takes all known relevant papers and searches through their references (which papers they cite) and citations (which papers cite them).

A protocol is a detailed explanation of your research project that should be written before you begin searching. It will likely include your research question, objectives, and search methodology, but information included within a protocol can vary across disciplines. The protocol will act as a map for you and your team, and will be helpful in the future if you or any other researchers want to replicate your search. Protocol development resources and registries:

  • PRISMA-P A checklist of recommended items for inclusion within a systematic review protocol.
  • Evidence Synthesis Protocol Template Developed by Cornell University Library, the protocol template is a useful tool that can be used to begin writing your protocol.
  • Campbell Collaboration: Submit a Proposal The Campbell Collaboration follows MECCIR reporting standards. If you register with Campbell, you are agreeing to publish the completed review with Campbell first. According to the title registration page, "Co-publication with other journals is possible only after discussing with the Campbell Coordinating Group and Editor in Chief." more... less... Disciplines: Business and Management, Crime and Justice, Disability, Education, International Development, Knowledge Translation and Implementation, Methods, Nutrition, and Social Welfare
  • PROSPERO registry "PROSPERO accepts registrations for systematic reviews, rapid reviews and umbrella reviews. PROSPERO does not accept scoping reviews or literature scans." more... less... Disciplines: health sciences and social care
  • Open Science Framework (OSF) registry If your review doesn't fit into one of the major registries, consider using Open Science Framework. OSF can be used to pre-register a systematic review protocol and to share documents such as a Zotero library, search strategies, and data extraction forms. more... less... Disciplines: multidisciplinary

Each database is different and will require a customized search string. We recommend creating one master keyword list and then translating it for each database by using that database's subject terms and search syntax. Below are some tools to assist with translating search strings from one database to the next.

  • Translating Search Strategies Template Created at Cornell University Library
  • Database Syntax Guide (Cochrane) Includes syntax for Cochrane Library, EBSCO, ProQuest, Ovid, and POPLINE.
  • Systematic Review Search Translator The IEBH SR-Accelerator is a suite of tools to speed up steps in the Systematic Review (SR) process.

When conducting a systematic review, you will likely be exporting hundreds or even thousands of citations from databases. Citation management tools are useful for storing, organizing, and managing your citations. They can also perform de-duplication to remove doubles of any citations you may have. The Libraries provide training and support on EndNote, Zotero, and Mendeley. Visit the links below to get started. You may also reach out directly to  [email protected]  with questions or consultation requests.

  • EndNote Support Guide
  • Mendeley Support Guide
  • Zotero Support Guide
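The de-duplication step mentioned above usually works by matching records on normalized fields. A minimal sketch, assuming hypothetical records with only a title and year (real citation managers also compare DOIs, authors, and page numbers):

```python
# A minimal sketch of de-duplicating exported citations by normalized
# title and year. Real tools use fuzzier matching; these records are
# hypothetical.
import re

def normalize(title):
    """Lowercase, strip punctuation, and collapse whitespace."""
    return re.sub(r"\s+", " ", re.sub(r"[^\w\s]", "", title.lower())).strip()

def deduplicate(records):
    seen, unique = set(), []
    for rec in records:
        key = (normalize(rec["title"]), rec["year"])
        if key not in seen:       # keep only the first record per key
            seen.add(key)
            unique.append(rec)
    return unique

records = [
    {"title": "Patient Empowerment in Cancer Care", "year": 2018},
    {"title": "patient empowerment in cancer care.", "year": 2018},  # dup
    {"title": "Thematic Synthesis Methods", "year": 2008},
]
print(len(deduplicate(records)))  # 2
```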

During the screening process, you will take all of the articles you exported from your searches and begin to remove studies that are not relevant to your topic. Use the inclusion/exclusion criteria you developed during the protocol-writing stage to screen the title and abstract of the articles you found. Any studies that don't fit the criteria of your review can be excluded. The full text of the remaining studies will then need to be screened to confirm that they fit the criteria of your review.

It is highly recommended that two independent reviewers screen all studies, resolving areas of disagreement by consensus or by a third party who is an expert in the field. Listed below are tools that can be used for article screening.

  • Rayyan A tool designed to expedite the screening process for systematic reviews. Create a free account, upload citations, and collaborate with others to screen your articles.
  • Covidence A subscription based systematic review management tool that provides article screening and quality assessment features. Northwestern does not currently have a subscription, so individual/group pricing applies.
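When two independent reviewers screen the same studies, their agreement is often summarized with Cohen's kappa, a chance-corrected agreement statistic. A small illustration with made-up include/exclude decisions (not output from any screening tool):

```python
# Cohen's kappa: (observed agreement - expected agreement) / (1 - expected),
# where expected agreement is what two raters would reach by chance given
# their individual label frequencies. Decisions below are made up.

def cohens_kappa(rater_a, rater_b):
    n = len(rater_a)
    labels = set(rater_a) | set(rater_b)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    expected = sum(
        (rater_a.count(lab) / n) * (rater_b.count(lab) / n) for lab in labels
    )
    return (observed - expected) / (1 - expected)

a = ["include", "exclude", "exclude", "include", "exclude", "exclude"]
b = ["include", "exclude", "include", "include", "exclude", "exclude"]
print(round(cohens_kappa(a, b), 3))  # 0.667
```

Kappa near 1 indicates strong agreement; low values suggest the inclusion criteria need clarification before screening continues.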

Bias refers to factors that can systematically affect the observations and conclusions of the study, causing them to be inaccurate. When compiling studies for systematic reviews, it is best practice to assess the risk of bias for each of the studies included, and then include the assessment in your final manuscript. The Cochrane Handbook recommends presenting the assessment as a table or graph.

In general, scoping reviews don't require a risk of bias assessment, but according to the PRISMA Scoping Review checklist , scoping reviews should include a "critical appraisal of individual sources of evidence." In a final manuscript, a critical appraisal could be an explanation of the limitations of the studies included.

Source: Andrea C. Tricco, Erin Lillie, Wasifa Zarin, et al.  PRISMA Extension for Scoping Reviews (PRISMA-ScR): Checklist and Explanation . Ann Intern Med.2018;169:467-473. [Epub ahead of print 4 September 2018]. doi: 10.7326/M18-0850

  • Cochrane Training Presentation: Risk of Bias Simple overview of risk of bias assessment, including examples of how to assess and present your conclusions.
  • Critical Appraisal Skills Programme (CASP) CASP has appraisal checklists designed for use with Systematic Reviews, Randomised Controlled Trials, Cohort Studies, Case Control Studies, Economic Evaluations, Diagnostic Studies, Qualitative studies and Clinical Prediction Rule.
  • JBI Critical Appraisal Tools From the Joanna Briggs Institute: "JBI’s critical appraisal tools assist in assessing the trustworthiness, relevance and results of published papers."

Once you and your team have screened all of the studies to be included in your review, you will need to extract the data from the studies in order to synthesize the results. You can use Excel or Google Forms to code the results. Additional resources are listed below.

  • Covidence: Data Extraction Covidence is a software that manages all aspects of systematic review processes, including data extraction. Northwestern does not currently subscribe to Covidence, so individual subscription rates apply.
  • Data Extraction Form Template (Excel)
  • RevMan Short for "review manager," RevMan is a free software used to manage Cochrane systematic reviews. It can assist with data extraction and analysis, including meta-analysis.
  • SR Toolbox "a web-based catalogue of tools that support various tasks within the systematic review and wider evidence synthesis process."
  • Systematic Review Data Repository "The Systematic Review Data Repository (SRDR) is a powerful and easy-to-use tool for the extraction and management of data for systematic review or meta-analysis."
  • A Practical Guide: Data Extraction for Intervention Systematic Reviews "This guide provides you with insights from the global systematic review community, including definitions, practical advice, links to the Cochrane Handbook, downloadable templates, and real-world examples." -Covidence Free ebook download (must enter information to download the title for free)
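A data extraction form is essentially a fixed set of columns filled in once per study. A minimal sketch of such a form as a CSV template, comparable in spirit to the Excel templates linked above (the column names are illustrative; tailor them to your protocol):

```python
# Build a hypothetical extraction form as CSV: one header row of fields,
# then one row per included study. Field names here are examples only.
import csv
import io

FIELDS = [
    "study_id", "authors", "year", "country", "study_design",
    "population", "intervention", "comparison", "outcomes", "key_findings",
]

buffer = io.StringIO()
writer = csv.DictWriter(buffer, fieldnames=FIELDS)
writer.writeheader()
writer.writerow({          # unfilled fields are left blank
    "study_id": "S01",
    "authors": "Thomas & Harden",
    "year": "2008",
    "study_design": "methodology paper",
})
print(buffer.getvalue())
```

Keeping one row per study with the same columns throughout makes the later synthesis step (tabulating or meta-analysing the extracted data) much simpler.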

In the data synthesis section, you will present the main findings of your evidence synthesis. There are multiple ways you could go about synthesizing the data, and that decision will depend largely on the type of studies you're synthesizing. In any case, it is standard to use the PRISMA flow diagram to map out the number of studies identified, screened, and included in your evidence synthesis project.

Librarians can help write the methods section of your review for publication, to ensure clarity and transparency of the search process. However, we encourage evidence synthesis teams to engage statisticians to carry out their data syntheses.

  • PRISMA Flow Diagram
  • PRISMA Flow Diagram Creator

Meta-Analysis

A quantitative statistical analysis that combines the results of multiple studies. The studies included must all be attempting to answer the same research question and have a similar research design. According to the  Cochrane Handbook,  "meta-analysis yields an overall statistic (together with its confidence interval) that summarizes the effectiveness of an experimental intervention compared with a comparator intervention."

  • Meta-Analysis Effect Size Calculator "...a web-based effect-size calculator. It is designed to facilitate the computation of effect-sizes for meta-analysis. Four effect-size types can be computed from various input data: the standardized mean difference, the correlation coefficient, the odds-ratio, and the risk-ratio."
  • Meta-Essentials A free tool for meta-analysis that "facilitates the integration and synthesis of effect sizes from different studies. The tool consists of a set of workbooks designed for Microsoft Excel that, based on your input, automatically produces all the required statistics, tables, figures, and more."
  • The metafor Package "a free and open-source add-on for conducting meta-analyses with the statistical software environment R."
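The "overall statistic" described above is typically computed by inverse-variance weighting: each study's effect size is weighted by the reciprocal of its variance, so more precise studies count for more. A minimal sketch of the fixed-effect version of this calculation, with made-up effect sizes (tools like metafor implement this and the random-effects extension):

```python
# Inverse-variance fixed-effect meta-analysis: pooled effect is the
# weighted mean of study effects with weights 1/variance; its standard
# error is sqrt(1 / sum of weights). Inputs below are illustrative.
import math

def fixed_effect(effects, variances):
    """Return the pooled effect and its 95% confidence interval."""
    weights = [1 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1 / sum(weights))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

effects = [0.30, 0.45, 0.20]    # e.g. standardized mean differences
variances = [0.04, 0.09, 0.02]
pooled, ci = fixed_effect(effects, variances)
print(f"pooled = {pooled:.3f}, 95% CI = ({ci[0]:.3f}, {ci[1]:.3f})")
```

Note that this fixed-effect model assumes the studies estimate one common effect; when designs differ, a random-effects model or, as the guide notes below, a narrative synthesis is more appropriate.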

Narrative or Descriptive

If you've included studies that are not similar in research design, then a meta-analysis is not possible. You will then use a narrative or descriptive synthesis to describe the results.

  • << Previous: Evidence Synthesis Resources by Discipline
  • Last Updated: Jul 25, 2024 3:13 PM
  • URL: https://libguides.northwestern.edu/evidencesynthesis

KINS5594: Guide to Systematic Searching for Evidence Synthesis Projects

  • Developing & Documenting Your Search Strategy
  • Preparing to Search PubMed
  • Managing References As You Search
  • Executing a Systematic Search in PubMed
  • Using Search Hedges in PubMed
  • Scheduling Your Meeting
  • After Your Meeting
  • Searching the Cochrane Library
  • Dealing with Duplicate References

What Will You Learn on This Page?

  • What Is Rayyan?
  • Screening References with Rayyan
  • Help with Rayyan
  • What's Next?

  • Congratulations & Next Steps

On this page of the research guide, you will learn what Rayyan is and how to use it to screen references for your evidence synthesis project.

Rayyan is a user-friendly tool which enables a single person or a team to perform masked screening of references for evidence synthesis projects. It has some excellent features, especially if you're working with a large set of results. Rayyan is designed for screening, not for citation management or citing while writing! You use it in conjunction with a citation management tool like RefWorks or Zotero.

Rayyan allows you to:

  • Highlight keywords to help you quickly identify important information in references.
  • Rapidly mark references for inclusion or exclusion, including labeling them with custom reasons.
  • Use machine learning and your existing screening decisions to sort unscreened items based on likely relevance.
  • Rayyan Systematic review screening tool.

To create your account, choose the Sign-Up link at the top of the page.


Select the Free account option and complete the sign-up process. Rayyan does offer subscriptions, but a free account is sufficient for many people's needs.

Watch the video below to learn how to create a new Rayyan project, import references from your citation manager, use Rayyan to screen those references, and finally export them back into your citation manager.

  • Screening References with Rayyan transcript

Have questions about using Rayyan? Check out their help documentation! If you like to reduce your reliance on your mouse and speed up some processes, check out the list of keyboard shortcuts, too.

  • Rayyan Help Center
  • Rayyan Keyboard Shortcuts

Then you're ready to move on to the final page of this guide, Congratulations & Next Steps !

  • << Previous: Dealing with Duplicate References
  • Next: Congratulations & Next Steps >>
  • Last Updated: Sep 4, 2024 4:51 PM
  • URL: https://guides.lib.uconn.edu/kins5594



Physicians’ perspectives on clinical indicators: systematic review and thematic synthesis


Ana Renker-Darby, Shanthi Ameratunga, Peter Jones, Corina Grey, Matire Harwood, Roshini Peiris-John, Timothy Tenbensel, Sue Wells, Vanessa Selak, Physicians’ perspectives on clinical indicators: systematic review and thematic synthesis, International Journal for Quality in Health Care , Volume 36, Issue 3, 2024, mzae082, https://doi.org/10.1093/intqhc/mzae082


Clinical indicators are increasingly used to improve the quality of care, particularly with the emergence of ‘big data’, but physicians’ views regarding their utility in practice are unclear. We reviewed the published literature investigating physicians’ perspectives, focusing on the following objectives in relation to quality improvement: (1) the role of clinical indicators, (2) what is needed to strengthen them, (3) their key attributes, and (4) the best tool(s) for assessing their quality. A systematic literature search (up to November 2022) was carried out using: Medline, EMBASE, Scopus, CINAHL, PsycInfo, and Web of Science. Articles that met all of the following inclusion criteria were included: reported on physicians’ perspectives on clinical indicators and/or tools for assessing the quality of clinical indicators, addressing at least one of the four review objectives; the clinical indicators related to care at least partially delivered by physicians; and published in a peer-reviewed journal. Data extracted from eligible studies were appraised using the Critical Appraisal Skills Programme tool. A thematic synthesis of data was conducted using NVivo software. Descriptive themes were inductively derived from codes, which were grouped into analytical themes answering each objective. A total of 14 studies were included, with 17 analytical themes identified for objectives 1–3 and no data identified for objective 4. Results showed that indicators can play an important motivating role for physicians to improve the quality of care and show where changes need to be made. For indicators to be effective, physicians should be involved in indicator development, recording relevant data should be straightforward, indicator feedback must be meaningful to physicians, and clinical teams need to be adequately resourced to act on findings. Effective indicators need to focus on the most important areas for quality improvement, be consistent with good medical care, and measure aspects of care within the control of physicians. Studies cautioned against using indicators primarily as punitive measures, and there were concerns that an overreliance on indicators can lead to a narrowed perspective on quality of care. This review identifies facilitators and barriers to meaningfully engaging physicians in developing and using clinical indicators to improve the quality of healthcare.

Clinical indicators are measures designed to assess and improve the quality of health services. When they seek to support quality improvement efforts by clinicians, it is critical to meaningfully engage with clinicians in the development and monitoring of these indicators [ 1 ]. Previous research has found that among clinicians, physicians may be resistant to clinical indicator initiatives, particularly when indicators are designed for the purpose of accountability rather than quality improvement [ 2 ]. If physicians are more engaged with indicator development and use, they are more likely to accept and act upon findings from those indicators [ 1 ].

Traditionally, there has been a focus on manual audits of patient records as a mechanism for supporting quality improvement. The availability of ‘big data’ (high volumes of diverse electronic data [ 3 ]) has the potential to revolutionize clinical engagement with quality improvement activities [ 4 ]. Rather than undertaking static, intermittent audits of a subset of patients, big data makes it possible to continuously generate electronic data on clinical indicators to improve performance [ 5 ]. Further, techniques such as natural language processing can enhance the clinical relevance of identifiable cohorts by enabling free text as well as structured data to be interrogated systematically [ 6 , 7 ]. To maximize the extent to which these advances translate into improvements in the quality of care, it is critical that, where indicators are designed to support quality improvement efforts by physicians, meaningful clinical engagement with physicians is obtained in the development and monitoring of clinical indicators.

Previously Jones et al . developed a Quality Indicator Clinical Appraisal (QICA) tool to appraise indicators based on key attributes identified through a systematic review and survey of quality of care experts [ 8 ]. The QICA tool ‘provides an explicit basis for discussions around indicator selection’ [ 8 ]. However, ensuring that physicians will use and act on clinical indicator data also requires consideration of physicians’ perspectives. The objectives of this study were to determine physicians’ perspectives regarding (1) the role indicators play in supporting quality improvement, (2) what is needed to strengthen the ability of indicators to drive improvements in quality, (3) the ‘key’ attributes of an effective indicator, and (4) the best tool(s) for assessing the quality of indicators.

The systematic review protocol was registered with the International Prospective Register of Systematic Reviews (PROSPERO CRD42020152496).

Search strategy

A systematic literature search was carried out using Medline, EMBASE, Scopus, Cochrane, CINAHL, PsycInfo, and Web of Science (searched up to November 2022). The search strategies are provided in Supplementary Appendix S1 . One reviewer (A.R.) screened all (deduplicated) titles and abstracts using the inclusion and exclusion criteria. Full texts were then screened by A.R., and included texts were discussed with a co-author (V.S.).

Inclusion and exclusion criteria

Articles were included if they: reported on data from physicians, either as a group or subgroup; focused on clinical indicators and/or tools for assessing the quality of clinical indicators; reported on physicians’ perspectives on clinical indicators and/or tools for assessing the quality of clinical indicators; related to clinical care at least partially delivered by physicians; were published in a peer-reviewed journal; and addressed at least one of the four objectives from the perspective of physicians.

Articles were excluded if they: reported on data from health professionals, patients and/or family members without any physicians or without separate reporting for physicians; focused on evaluating quality of care, clinical guidelines, models of care or diagnostic criteria; were in a language other than English; were editorial or opinion pieces; or had insufficient data to adequately interpret the results. No time restrictions or methodological restrictions were applied.

Quality appraisal

As all included citations used qualitative methodologies, we selected the 10-item Critical Appraisal Skills Programme (CASP) qualitative studies tool [ 9 ] to assess the quality of each citation and enable it to be factored into the review. A.R. reviewed the studies using the CASP checklist and discussed findings with V.S. until consensus was reached.

Data extraction and thematic synthesis

The method of thematic synthesis was adapted from Thomas and Harden [10] and guided by the approach detailed by Braun and Clarke [11]. A.R. and V.S. independently screened the results and discussion sections of all included articles to extract data consistent with the inclusion and exclusion criteria for this review (including direct quotes from study participants and author interpretations). Differences between A.R. and V.S. in data inclusion decisions were discussed until consensus was reached. A.R. coded each sentence or phrase with one or more codes using NVivo software [11]. Descriptive themes were inductively derived from the codes, which were then grouped into analytical themes answering each of the objectives. Codes were renamed iteratively where relevant or combined if they addressed similar ideas. Themes were discussed between V.S. and A.R. until consensus was reached, and then reviewed and approved by all authors.

Ethics approval was not required as this study is a systematic review of published literature.

Study selection

From 4620 initial citations, 14 studies were included in the review (Fig. 1). The included papers were published between 2000 and 2022 and were based in the United Kingdom, United States, Canada, China, and Germany, in teaching hospitals, primary care groups, and an ambulatory care organization (Table 1).

PRISMA flow diagram showing the four stages of the study selection process. 4617 records were identified through database searching and 3 additional records were identified through other sources. Following deduplication, 2200 records remained. These were screened using the inclusion criteria, and 1809 records were excluded. The remaining 391 articles were assessed for eligibility using their full texts, of which 377 were excluded. The remaining 14 articles were included in the qualitative synthesis.

PRISMA flow diagram of study selection.

Summary of included studies.

Study (year) | Aim | Method | Setting | Participants
Ahmed et al. (2019) | To explore the views of clinician–scientists and quality improvement experts regarding proposed domains of PCC, and to gain an understanding of current practices and opportunities for measurement of PCC at a healthcare system level. | Semi-structured interviews (n = 16) | Canada, USA, UK | Clinician–scientists (n = 4), quality improvement experts (n = 12)
Benn et al. (2015), Chapter 6: Qualitative Evaluation* | To conduct a quasi-experimental evaluation of the feedback initiative and its effect on quality of anaesthetic care and perioperative efficiency. | Interviews (n = 35) | Teaching hospital in London, UK | Consultant anaesthetists (n = 24), surgical nursing leads (n = 6), perioperative service leads (n = 5)
Breidenbach et al. (2021) | To identify factors that inhibit or facilitate the usage of PROs for clinical decision-making and monitoring patients in existing structures for oncological care, certified colorectal cancer centres in Germany. | Semi-structured interviews (n = 12) | Cancer centres participating in the EDIUM study in Germany | Physicians (n = 7), psycho-oncologist (n = 1), nurses (n = 3), physician assistant (n = 1)
D’Lima et al. (2017)* | To report the experience of anaesthetists participating in a long-term initiative to provide comprehensive personalized feedback to consultants on patient-reported quality of recovery indicators in a large London teaching hospital. | Semi-structured interviews (n = 21) | Teaching hospital in London, UK | Consultant anaesthetists (n = 13), surgical nursing leads (n = 6), theatre manager (n = 1), clinical coordinator for recovery (n = 1)
Exworthy et al. (2003) | To review qualitative findings from an empirical study within one English primary care group on the response to a set of clinical performance indicators relating to general practitioners in terms of the effect upon their clinical autonomy. | Semi-structured interviews (n = 52) | Primary care group in southern England, UK | GPs (n = 29), practice nurses (n = 12), practice managers (n = 11)
Gagliardi et al. (2008) | To explore patient, nurse, physician, and manager preferences for cancer care quality indicators. | Interviews (n = 30) | Two teaching hospitals, Canada | Surgeons (n = 2), radiation oncologists (n = 2), medical oncologist (n = 1), nurses (n = 5), managers (n = 5), patients (n = 15)
Gill et al. (2012) | To explore the perspectives of general practitioners on the introduction of child-specific quality markers to the UK’s Quality Outcomes Framework. | Semi-structured interviews (n = 20) | Five Primary Care Trusts, England | GPs (n = 20)
Gray et al. (2018) | To explore the role that metrics and measurement play in a wide-reaching ‘Lean’-based continuous quality improvement effort carried out in the primary care departments of a large, ambulatory care healthcare organization. | Semi-structured interviews (n = 130) | Large, multispecialty, ambulatory care organization, USA | Primary care physicians (number of participants not disclosed)
Hicks et al. (2021) | To identify all available patient-reported outcome measures relevant to diseases treated by vascular surgeons and to evaluate vascular surgeon perceptions, barriers to widespread implementation, and concerns regarding PROs. | Focus groups (number of focus groups not disclosed) | Society for Vascular Surgery, USA | Society for Vascular Surgery members (number of participants not disclosed)
Litvin et al. (2015) | To systematically solicit recommendations from Meaningful Use exemplars to inform Stage 3 Meaningful Use clinical quality measure requirements. | Focus groups (n = 3) | A national Electronic Health Record-based primary care practice-based research network, USA | General internists (n = 5), internal medicine/paediatric physicians (n = 2), family medicine physicians (n = 16)
Maxwell et al. (2002) | To investigate the acceptability among general practitioners of a patient-completed post-consultation measure of outcome and its use in conjunction with two further quality indicators: time spent in consultation and patients reporting knowing the doctor well. | Focus groups (n = 7) | Oxford, Coventry, London, and Edinburgh, UK | GPs (n = 46)
Rasooly et al. (2022) | To understand the current state of quality and performance measurement in primary diabetes care, and the facilitators and barriers to their implementation. | Interviews (n = 26) | Tertiary hospitals and CHCs in Shanghai, China | Patients (n = 12), family doctors (n = 3), endocrinologists (n = 2), CHC managers (n = 4), policymakers (n = 5)
Van den Heuvel et al. (2010) | To describe and explore the views of German general practitioners on the clinical indicators of the Quality and Outcomes Framework. | Focus groups (n = 7) | North-western part of Germany | GPs (n = 54)
Wilkinson et al. (2000) | To investigate reactions to the use of evidence-based cardiovascular and stroke performance indicators within one primary care group. | Semi-structured interviews (n = 29) | Fifteen practices from a primary care group in southern England | GPs (n = 29)

CHC, community healthcare centre; GP, general practitioner; PCC, patient-centred care; PRO, patient-reported outcome. *Articles report on the same study.

Articles reporting on the same study were retained to incorporate potentially differing interpretations of the data.

Quality assessment

All articles used qualitative methodology (either interviews or focus groups). The CASP quality assessment revealed that most included studies met most quality criteria (Supplementary Appendix S2). All studies provided a clear description of their findings. However, several studies had methodological limitations. Nine of the 14 articles did not discuss ethical issues, with many failing to report ethics approval for the data collection. The relationship between researchers and participants was not adequately discussed in 11 studies, 2 articles did not specify the research aim, and several others failed to report the participant recruitment strategy or included only a limited discussion of how data were analysed.

Objectives and themes

Data from the included studies addressed the first three objectives, but no articles addressed the fourth objective. The themes for each objective are described below and summarized in Table 2.

Objectives and themes.

Objective: What is the role of clinical indicators in supporting quality improvement?
Themes:
  • Show where changes need to be made
  • Motivate physicians to improve quality of care
  • Increase physicians’ accountability
  • Can encourage myopic quality improvement
  • Should be used by physicians, not government or the public
  • Should not be used punitively

Objective: What is needed to strengthen the ability of indicators to drive improvements in quality?
Themes:
  • Support and participation of physicians in their development
  • Recording data should be straightforward
  • Feedback delivered in a way that is helpful for physicians
  • Availability of sufficient resources for quality improvement
  • Quality improvement requires working together
  • Incentives have advantages and disadvantages

Objective: What are the key attributes of effective indicators?
Themes:
  • Target the most important areas for quality improvement
  • Consistent with good medical care
  • Within physicians’ control
  • Reliable
  • Consider patient-reported measures alongside

What is the role of clinical indicators in supporting quality improvement?

Show where changes need to be made

Physicians noted that a key role of clinical indicators was their ability to illuminate specific areas of care requiring change. In many cases, physicians stated that it was only through clinical indicators that they received regular feedback on the quality of their care. Physicians appreciated the objective assessment of quality that clinical indicators provided, as opposed to intuiting where care may require improvement. Physicians also thought that clinical indicators could facilitate up-to-date, evidence-based care, provided that the indicators were based on best practice.

Motivate physicians to improve quality of care

Physicians commented on two ways in which clinical indicators motivated efforts to improve quality of care: first, seeing the clinical indicator feedback was often a prompt for physicians to take action on quality improvement. Physicians expressed that it was difficult to ignore this type of objective feedback. Second, clinical indicator feedback showing improvements in care motivated physicians, as it demonstrated tangible evidence of how quality improvement could translate into improved outcomes. Many physicians also thought that engaging in quality improvement was part of being a ‘good’ physician.

Increase physicians’ accountability

Physicians thought that measuring quality using clinical indicators would make them more accountable for the quality of their care. Some were concerned however that clinical indicators could be used by their organization for performance management, and they feared a loss of autonomy in their practice.

Can encourage myopic quality improvement

Physicians were concerned that clinical indicators could lead to a myopic view and produce unintended consequences. They commented that many of the ‘softer’ aspects of quality were difficult to quantify using indicators and risked being side-lined in favour of areas of care more easily quantified. Physicians were concerned that using clinical indicators may distract them from providing more holistic, patient-centred care. Overall, physicians stressed that clinical indicators should be a means to good care, not an end in themselves.

Should be used by physicians, not government or the public

Physicians stressed that clinical indicators should be used by physicians for the purpose of quality improvement, not by government or the public. They emphasized the potential for indicators to be misinterpreted by those outside the profession and were worried about being held accountable for measures they could not influence. Physicians also highlighted the tensions between their own priorities for quality improvement and the priorities of government or their organization. They thought that government or organization management were more likely to prioritize productivity and efficiency over the quality of patient care, and were worried that clinical indicators could entrench these priorities.

Should not be used punitively

Physicians thought that clinical indicators could either be employed in a ‘soft’ manner to encourage quality improvement or a ‘hard’ manner where poor performance would be criticized or punished. They stressed that this punitive approach would only isolate physicians and was unlikely to improve the quality of care.

What is needed to strengthen the ability of clinical indicators to drive improvements in quality?

Support and participation of physicians in their development

Physicians thought that clinical indicators were more likely to drive improvements in quality if they had the support of clinicians. Physicians were more inclined to use the indicators to make changes to their practice if they understood their purpose and agreed with the measures. They suggested that one way of ensuring their buy-in was to involve them in the development of clinical indicators.

Recording data should be straightforward

Physicians thought that recording data for clinical indicators could lead to an unmanageable increase in their workload and may require additional support staff. They suggested that recording indicator data should be integrated into their workflow and automated where possible.

Feedback delivered in a way that is helpful for physicians

Physicians had several suggestions for useful ways to deliver clinical indicator feedback. They wanted indicator feedback delivered in a manner that was visually appealing and easy to interpret—most suggested the use of charts rather than tables. Comparison feedback between departments, practices, or individual physicians was also considered useful. Physicians found it helpful to see patterns over time in their feedback. They also highlighted that the timing of feedback was important and should be aligned with appropriate interventions to improve quality.

Availability of sufficient resources for quality improvement

Physicians stated that sufficient resources were required both for the use of clinical indicators and for subsequent improvements in quality. They also emphasized that they needed sufficient time and resources to reflect on their practice and make any changes needed to respond to indicator feedback and improve quality.

Quality improvement requires working together

Physicians emphasized that measuring quality of care was not enough to improve quality—it was also crucial that they had support to translate feedback into quality improvement. Most importantly, physicians wanted clinical indicator feedback to be linked to a clear action for improvement. They also suggested that quality improvement needed to happen as a team.

Using incentives has advantages and disadvantages

Physicians thought that while tying incentives to clinical indicators could accelerate quality improvement, there was also the potential for unintended consequences and ‘gaming’ the system.

What are the key attributes of effective indicators?

Target the most important areas of care for quality improvement

Physicians thought that the number of clinical indicators should be limited and only cover the most important areas of care. In particular, physicians suggested a focus on diseases where improved care can have a substantial impact, or a focus on especially high-risk patients. Technical process indicators were also suggested as an important aspect of care to measure. Physicians were generally resistant to productivity-oriented indicators.

Consistent with good medical care

Physicians thought it was important for clinical indicators to be evidence-based and to reflect best practice. They felt that indicators must be consistent with other policies and guidelines, and indicators should not contradict each other.

Within physicians’ control

Physicians thought it was important that clinical indicators measured aspects of care that were within their control. This was particularly important if indicators were tied to incentives or used punitively. Although many physicians agreed that outcome indicators measured what was ultimately important, they also expressed concern that outcomes were often affected by factors outside of physicians’ control.

Reliable

Physicians identified several attributes that made a clinical indicator reliable, and hence trustworthy enough to drive improvements in quality. They stated that clinical indicators should be of high quality, valid, precise, technically specific, clearly defined, and should only require information that could be measured accurately.

Consider patient-reported measures alongside

Physicians agreed that there was a role for patient-reported outcome measures in driving quality improvement. Patient-reported outcome measures and patient experience indicators were seen as representing one aspect of quality that was important to consider. However, physicians also recommended that such measures should be considered alongside other clinical indicators. They also thought that some aspects of patient experience are subjective and therefore less helpful for quality improvement.

Statement of principal findings

This systematic review found overall agreement that indicators could play a clear role in motivating physicians to improve the quality of care and showing where changes needed to be made. While it was felt that indicators increased physicians’ accountability, it was clear that they should be used by physicians themselves, rather than by the government or the public, and should not be used punitively. There was concern that an overreliance on indicators might lead to myopic quality improvement at the expense of more holistic care. In order to strengthen the ability of indicators to drive improvements in quality, physicians need to support and participate in the process of indicator development, recording relevant data should be straightforward, indicator feedback needs to be meaningful, and physicians and their teams need to be adequately resourced to act on findings.

While it was recognized that incentives might accelerate quality improvement, there was also the risk of unintended consequences and ‘gaming’. Key attributes of effective indicators were a focus on the most important areas for quality improvement, consistency with good medical care, measurement of aspects of care that were within the control of physicians and reliability. While there was support for the use of patient-reported outcome measures alongside clinical indicators, there was a potential disconnect between the supposed subjectivity of these measures and the desire for indicators to be ‘accurate’ or objective.

Strengths and limitations

This thematic synthesis of data identified from a systematic review of the literature was focused on physicians’ views regarding the utility of clinical indicators in practice. This is important to understand given the increasing use of clinical indicators and expectations that physicians will use and act on clinical indicator data. As we did not have access to the raw data from primary studies, our findings represent a synthesis of selected data included in the primary studies as well as the authors’ interpretations of that data. The literature search and coding were performed by one reviewer, which may have resulted in bias in the selection of articles. Lastly, texts in languages other than English were excluded.

There were also several limitations in the literature included in this systematic review. As noted, most (9/14) articles did not discuss ethical issues associated with their research with many failing to report ethics approval of the data collection. Generalizability of the results to all physicians is difficult to ascertain because most participants were primary care physicians. Generalizability of the results may also depend on when the data were obtained (given that perspectives are likely to change over time) and the specific health systems examined in each study. Unfortunately, it was not feasible to disaggregate themes according to study context due to the limited number of included studies.

Interpretation within the context of the wider literature

While our literature search did not return results for physicians’ perspectives on the best tools for appraising the quality of clinical indicators, Jones et al. [8] have previously developed the Quality Indicator Clinical Appraisal (QICA) tool to provide an explicit basis for clinical indicator selection. The findings of our review are consistent with key aspects of the QICA tool, including the need for indicators to measure the most important aspects of medical care; to be evidence-based, acceptable, concordant with other measures of the issue, and reliable; and to consider the potential for unintended effects, such as bias, as well as the resource implications of measurement itself [12]. Several technical characteristics listed in the QICA tool were not explored in our systematic review, including the need for a well-defined target population, exclusions, and measurement systems; the need for indicators to reflect differing cultural values; the power and precision of an indicator to detect clinically important changes beyond random variation; and potential ethical issues involved with data gathering and reporting of results [12].

The final part of the QICA tool addresses the practical implications of indicator implementation in both data collection and data analysis [12]. There was significant overlap between the characteristics included in the tool and those that physicians in our systematic review considered important. Similar findings included the importance of limiting the extra work involved in collecting data for clinical indicators, ensuring that technology is sufficient, and ensuring that indicator feedback is actionable and understandable by physicians so that it can be used to improve the quality of care.

Implications for policy, practice, and research

This review found that indicators can play an important motivating role for physicians to improve the quality of care and show where changes need to be made. For indicators to be effective, physicians should be involved in indicator development, recording relevant data should be straightforward, indicator feedback must be meaningful to physicians, and clinical teams need to be adequately resourced to act on findings. Effective indicators need to focus on the most important areas for quality improvement, be consistent with good medical care, and measure aspects of care within the control of physicians. Studies cautioned against using indicators primarily as punitive measures, and there were concerns that an overreliance on indicators could lead to a narrowed perspective on quality of care.

In this systematic review, we found that physicians believe that they should participate in the development of indicators and control the use of those indicators. However, it is worth noting that there are other legitimate groups and stakeholders that also have an interest in the development and use of indicators. Physicians form one professional group among a broader range of multi-disciplinary health providers, as well as patients themselves, whose perspectives need to be engaged in indicator development. It has also been argued that a key impediment faced by collaborative healthcare teams working towards quality improvement is the ‘structured embeddedness of medical dominance’ [13]. Balancing the perspectives of multiple professional groups and patients, while avoiding the tendency for physicians to disengage from the process entirely, is one of the challenges for the use of clinical indicators to drive quality improvements in policy as well as practice, and would be a valuable area for future research.

This review identified facilitators and barriers to meaningfully engaging physicians in developing and using clinical indicators to improve the quality of healthcare. Such information will help maximize the extent to which the potential of ‘big data’ to revolutionize clinical engagement with quality improvement activities can be realized.

Not applicable.

Ana Renker-Darby (Data curation, analysis (lead), original draft preparation, reviewing and editing), Shanthi Ameratunga (Analysis, reviewing and editing), Peter Jones (Analysis, reviewing and editing), Corina Grey (Analysis, reviewing and editing), Matire Harwood (Analysis, reviewing and editing), Roshini Peiris-John (Analysis, reviewing and editing), Timothy Tenbensel (Analysis, reviewing and editing), Sue Wells (Analysis, reviewing and editing), Vanessa Selak (Conceptualisation, analysis, reviewing and editing).

Supplementary data are available at IJQHC online.

During the conduct of this research, S.A., C.G., M.H., P.J., R.P.J., V.S., and S.W. received funding for other research projects from the Health Research Council of New Zealand; S.A., C.G., M.H., V.S., and S.W. received funding from the National Heart Foundation of New Zealand and the National Science Challenge (Healthier Lives); V.S. and S.W. received funding from the Auckland Medical Research Foundation, and P.J. received funding from the A+ Trust.

A.R.’s work on this research was funded by a grant from the University of Auckland’s Faculty of Medical and Health Sciences Research Development Fund.

No new data were generated or analysed in support of this research.

1. Raleigh VS, Foot C. Getting the Measure of Quality: Opportunities and Challenges. London: The King’s Fund, 2010.

2. Solberg LI, Asche SE, Margolis KL et al. Measuring an organization’s ability to manage change: the change process capability questionnaire and its use for improving depression care. Am J Med Qual 2008;23:193–200. https://doi.org/10.1177/1062860608314942

3. Hemingway H, Asselbergs FW, Danesh J et al. Big data from electronic health records for early and late translational cardiovascular research: challenges and potential. Eur Heart J 2018;39:1481–95. https://doi.org/10.1093/eurheartj/ehx487

4. Roski J, Bo-Linn GW, Andrews TA. Creating value in health care through big data: opportunities and policy implications. Health Affairs 2014;33:1115–22.

5. Patel S, Rajkomar A, Harrison JD et al. Next-generation audit and feedback for inpatient quality improvement using electronic health record data: a cluster randomised controlled trial. BMJ Qual Saf 2018;27:691–9.

6. Hurrell M, Stein A, MacDonald S. Use of natural language processing to identify significant abnormalities for follow-up in a large accumulation of non-delivered radiology reports. J Health Med Inform 2017;8:2.

7. Liao KP, Cai T, Savova GK et al. Development of phenotype algorithms using electronic medical records and incorporating natural language processing. BMJ 2015;350:h1885.

8. Jones P, Shepherd M, Wells S et al. Review article: what makes a good healthcare quality indicator? A systematic review and validation study. Emerg Med Australas 2014;26:113–24. https://doi.org/10.1111/1742-6723.12195

9. Critical Appraisal Skills Programme. CASP Qualitative Checklist 2018. https://casp-uk.net/casp-tools-checklists/ (1 November 2019, date last accessed).

10. Thomas J, Harden A. Methods for the thematic synthesis of qualitative research in systematic reviews. BMC Med Res Methodol 2008;8:45. https://doi.org/10.1186/1471-2288-8-45

11. Braun V, Clarke V. Using thematic analysis in psychology. Qual Res Psychol 2006;3:77–101. https://doi.org/10.1191/1478088706qp063oa

12. Jones P. Defining and Validating a Metric for Emergency Department Crowding. Auckland, New Zealand: University of Auckland, 2018.

13. Bourgeault IL, Mulvale G. Collaborative health care teams in Canada and the USA: confronting the structural embeddedness of medical dominance. Health Sociol Rev 2006;15:481–95. https://doi.org/10.5172/hesr.2006.15.5.481


Research article | Open access | Published: 10 July 2008

Methods for the thematic synthesis of qualitative research in systematic reviews

James Thomas & Angela Harden

BMC Medical Research Methodology, volume 8, Article number: 45 (2008)


There is a growing recognition of the value of synthesising qualitative research in the evidence base in order to facilitate effective and appropriate health care. In response to this, methods for undertaking these syntheses are currently being developed. Thematic analysis is a method that is often used to analyse data in primary qualitative research. This paper reports on the use of this type of analysis in systematic reviews to bring together and integrate the findings of multiple qualitative studies.

We describe thematic synthesis, outline several steps for its conduct and illustrate the process and outcome of this approach using a completed review of health promotion research. Thematic synthesis has three stages: the coding of text 'line-by-line'; the development of 'descriptive themes'; and the generation of 'analytical themes'. While the development of descriptive themes remains 'close' to the primary studies, the analytical themes represent a stage of interpretation whereby the reviewers 'go beyond' the primary studies and generate new interpretive constructs, explanations or hypotheses. The use of computer software can facilitate this method of synthesis; detailed guidance is given on how this can be achieved.

We used thematic synthesis to combine the studies of children's views and identified key themes to explore in the intervention studies. Most interventions were based in school and often combined learning about health benefits with 'hands-on' experience. The studies of children's views suggested that fruit and vegetables should be treated in different ways, and that messages should not focus on health warnings. Interventions that were in line with these suggestions tended to be more effective. Thematic synthesis enabled us to stay 'close' to the results of the primary studies, synthesising them in a transparent way, and facilitating the explicit production of new concepts and hypotheses.

We compare thematic synthesis to other methods for the synthesis of qualitative research, discussing issues of context and rigour. Thematic synthesis is presented as a tried and tested method that preserves an explicit and transparent link between conclusions and the text of primary studies; as such it preserves principles that have traditionally been important to systematic reviewing.


The systematic review is an important technology for the evidence-informed policy and practice movement, which aims to bring research closer to decision-making [ 1 , 2 ]. This type of review uses rigorous and explicit methods to bring together the results of primary research in order to provide reliable answers to particular questions [ 3 – 6 ]. The picture that is presented aims to be distorted neither by biases in the review process nor by biases in the primary research which the review contains [ 7 – 10 ]. Systematic review methods are well-developed for certain types of research, such as randomised controlled trials (RCTs). Methods for reviewing qualitative research in a systematic way are still emerging, and there is much ongoing development and debate [ 11 – 14 ].

In this paper we present one approach to the synthesis of findings of qualitative research, which we have called 'thematic synthesis'. We have developed and applied these methods within several systematic reviews that address questions about people's perspectives and experiences [ 15 – 18 ]. The context for this methodological development is a programme of work in health promotion and public health (HP & PH), mostly funded by the English Department of Health, at the EPPI-Centre, in the Social Science Research Unit at the Institute of Education, University of London in the UK. Early systematic reviews at the EPPI-Centre addressed the question 'what works?' and contained research testing the effects of interventions. However, policy makers and other review users also posed questions about intervention need, appropriateness and acceptability, and factors influencing intervention implementation. To address these questions, our reviews began to include a wider range of research, including research often described as 'qualitative'. We began to focus, in particular, on research that aimed to understand the health issue in question from the experiences and point of view of the groups of people targeted by HP&PH interventions (We use the term 'qualitative' research cautiously because it encompasses a multitude of research methods at the same time as an assumed range of epistemological positions. In practice it is often difficult to classify research as being either 'qualitative' or 'quantitative' as much research contains aspects of both [ 19 – 22 ]. Because the term is in common use, however, we will employ it in this paper).

When we started the work for our first series of reviews which included qualitative research in 1999 [ 23 – 26 ], there was very little published material that described methods for synthesising this type of research. We therefore experimented with a variety of techniques borrowed from standard systematic review methods and methods for analysing primary qualitative research [ 15 ]. In later reviews, we were able to refine these methods and began to apply thematic analysis in a more explicit way. The methods for thematic synthesis described in this paper have so far been used explicitly in three systematic reviews [ 16 – 18 ].

The review used as an example in this paper

To illustrate the steps involved in a thematic synthesis we draw on a review of the barriers to, and facilitators of, healthy eating amongst children aged four to 10 years old [ 17 ]. The review was commissioned by the Department of Health, England to inform policy about how to encourage children to eat healthily in the light of recent surveys highlighting that British children are eating less than half the recommended five portions of fruit and vegetables per day. While we focus on the aspects of the review that relate to qualitative studies, the review was broader than this and combined answering traditional questions of effectiveness, through reviewing controlled trials, with questions relating to children's views of healthy eating, which were answered using qualitative studies. The qualitative studies were synthesised using 'thematic synthesis' – the subject of this paper. We compared the effectiveness of interventions which appeared to be in line with recommendations from the thematic synthesis with those that did not. This enabled us to see whether the understandings we had gained from the children's views helped us to explain differences in the effectiveness of different interventions: the thematic synthesis had enabled us to generate hypotheses which could be tested against the findings of the quantitative studies – hypotheses that we could not have generated without the thematic synthesis. The methods of this part of the review are published in Thomas et al . [ 27 ] and are discussed further in Harden and Thomas [ 21 ].

Qualitative research and systematic reviews

The act of seeking to synthesise qualitative research means stepping into more complex and contested territory than is the case when only RCTs are included in a review. First, methods are much less developed in this area, with fewer completed reviews available from which to learn, and second, the whole enterprise of synthesising qualitative research is itself hotly debated. Qualitative research, it is often proposed, is not generalisable and is specific to a particular context, time and group of participants. Thus, in bringing such research together, reviewers are open to the charge that they de-contextualise findings and wrongly assume that these are commensurable [ 11 , 13 ]. These are serious concerns which it is not the purpose of this paper to contest. We note, however, that a strong case has been made for qualitative research to be valued for the potential it has to inform policy and practice [ 11 , 28 – 30 ]. In our experience, users of reviews are interested in the answers that only qualitative research can provide, but are not able to handle the deluge of data that would result if they tried to locate, read and interpret all the relevant research themselves. Thus, if we acknowledge the unique importance of qualitative research, we need also to recognise that methods are required to bring its findings together for a wide audience – at the same time as preserving and respecting its essential context and complexity.

The earliest published work that we know of that deals with methods for synthesising qualitative research was written in 1988 by Noblit and Hare [ 31 ]. This book describes the way that ethnographic research might be synthesised, but the method has been shown to be applicable to qualitative research beyond ethnography [ 32 , 11 ]. As well as meta-ethnography, other methods have been developed more recently, including 'meta-study' [ 33 ], 'critical interpretive synthesis' [ 34 ] and 'metasynthesis' [ 13 ].

Many of the newer methods being developed have much in common with meta-ethnography, as originally described by Noblit and Hare, and often state explicitly that they are drawing on this work. In essence, this method involves identifying key concepts from studies and translating them into one another. The term 'translating' in this context refers to the process of taking concepts from one study and recognising the same concepts in another study, though they may not be expressed using identical words. Explanations or theories associated with these concepts are also extracted and a 'line of argument' may be developed, pulling corroborating concepts together and, crucially, going beyond the content of the original studies (though 'refutational' concepts might not be amenable to this process). Some have claimed that this notion of 'going beyond' the primary studies is a critical component of synthesis, and is what distinguishes it from the types of summaries of findings that typify traditional literature reviews [e.g. [ 32 ], p209]. In the words of Margarete Sandelowski, "metasyntheses are integrations that are more than the sum of parts, in that they offer novel interpretations of findings. These interpretations will not be found in any one research report but, rather, are inferences derived from taking all of the reports in a sample as a whole" [[ 14 ], p1358].

Thematic analysis has been identified as one of a range of potential methods for research synthesis alongside meta-ethnography and 'metasynthesis', though precisely what the method involves is unclear, and there are few examples of it being used for synthesising research [ 35 ]. We have adopted the term 'thematic synthesis', as we translated methods for the analysis of primary research – often termed 'thematic' – for use in systematic reviews [ 36 – 38 ]. As Boyatzis [[ 36 ], p4] has observed, thematic analysis is "not another qualitative method but a process that can be used with most, if not all, qualitative methods..." . Our approach concurs with this conceptualisation of thematic analysis, since the method we employed draws on other established methods but uses techniques commonly described as 'thematic analysis' in order to formalise the identification and development of themes.

We now move to a description of the methods we used in our example systematic review. While this paper has the traditional structure for reporting the results of a research project, the detailed methods (e.g. precise terms we used for searching) and results are available online. This paper identifies the particular issues that relate especially to reviewing qualitative research systematically and then to describing the activity of thematic synthesis in detail.

When searching for studies for inclusion in a 'traditional' statistical meta-analysis, the aim of searching is to locate all relevant studies. Failing to do this can undermine the statistical models that underpin the analysis and bias the results. However, Doyle [[ 39 ], p326] states that, "like meta-analysis, meta-ethnography utilizes multiple empirical studies but, unlike meta-analysis, the sample is purposive rather than exhaustive because the purpose is interpretive explanation and not prediction" . This suggests that it may not be necessary to locate every available study because, for example, the results of a conceptual synthesis will not change if ten rather than five studies contain the same concept, but will depend on the range of concepts found in the studies, their context, and whether they are in agreement or not. Thus, principles such as aiming for 'conceptual saturation' might be more appropriate when planning a search strategy for qualitative research, although it is not yet clear how these principles can be applied in practice. Similarly, other principles from primary qualitative research methods may also be 'borrowed' such as deliberately seeking studies which might act as negative cases, aiming for maximum variability and, in essence, designing the resulting set of studies to be heterogeneous, in some ways, instead of achieving the homogeneity that is often the aim in statistical meta-analyses.

However one searches, qualitative research is difficult to find [ 40 – 42 ]. In our review, it was not possible to rely on simple electronic searches of databases. We needed to search extensively in 'grey' literature, ask authors of relevant papers if they knew of more studies, and look especially for book chapters, and we spent considerable effort screening titles and abstracts by hand and looking through journals manually. In this sense, while we were not driven by the statistical imperative of locating every relevant study, when it actually came to searching, we found very little difference between the methods we had to use to find qualitative studies and the methods we use when searching for studies for inclusion in a meta-analysis.

Quality assessment

Assessing the quality of qualitative research has attracted much debate and there is little consensus regarding how quality should be assessed, who should assess quality, and, indeed, whether quality can or should be assessed in relation to 'qualitative' research at all [ 43 , 22 , 44 , 45 ]. We take the view that the quality of qualitative research should be assessed to avoid drawing unreliable conclusions. However, since there is little empirical evidence on which to base decisions for excluding studies based on quality assessment, we took the approach in this review to use 'sensitivity analyses' (described below) to assess the possible impact of study quality on the review's findings.

In our example review we assessed our studies according to 12 criteria, which were derived from existing sets of criteria proposed for assessing the quality of qualitative research [ 46 – 49 ], principles of good practice for conducting social research with children [ 50 ], and whether studies employed appropriate methods for addressing our review questions. The 12 criteria covered three main quality issues. Five related to the quality of the reporting of a study's aims, context, rationale, methods and findings (e.g. was there an adequate description of the sample used and the methods for how the sample was selected and recruited?). A further four criteria related to the sufficiency of the strategies employed to establish the reliability and validity of data collection tools and methods of analysis, and hence the validity of the findings. The final three criteria related to the assessment of the appropriateness of the study methods for ensuring that findings about the barriers to, and facilitators of, healthy eating were rooted in children's own perspectives (e.g. were data collection methods appropriate for helping children to express their views?).
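The three-part checklist described above can be sketched in code. The following is a hypothetical illustration only: the criterion names are invented shorthand, not the review's actual wording, and the grouping (5 + 4 + 3 = 12) follows the paragraph above.

```python
# Hypothetical sketch of the 12-criterion quality assessment described in
# the text; criterion names are illustrative shorthand, not the review's
# actual wording.
QUALITY_CRITERIA = {
    # five criteria on the quality of reporting
    "reporting": ["aims", "context", "rationale", "methods", "findings"],
    # four criteria on strategies for reliability and validity
    "reliability_and_validity": [
        "data_collection_tools", "analysis_methods",
        "triangulation", "grounding_in_data",
    ],
    # three criteria on rooting findings in children's own perspectives
    "child_centredness": [
        "appropriate_data_collection", "children_express_views",
        "child_perspective_privileged",
    ],
}

def assess_study(judgements):
    """Count criteria met per category; an unassessed criterion counts as not met."""
    return {
        category: sum(1 for c in criteria if judgements.get(c, False))
        for category, criteria in QUALITY_CRITERIA.items()
    }
```

A structured record like this, rather than a single pass/fail judgement, supports the sensitivity analyses mentioned above, since studies can later be grouped by how many criteria they met in each category.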

Extracting data from studies

One issue which is difficult to deal with when synthesising 'qualitative' studies is 'what counts as data' or 'findings'? This problem is easily addressed when a statistical meta-analysis is being conducted: the numeric results of RCTs – for example, the mean difference in outcome between the intervention and control – are taken from published reports and are entered into the software package being used to calculate the pooled effect size [ 3 , 51 ].

Deciding what to abstract from the published report of a 'qualitative' study is much more difficult. Campbell et al . [ 11 ] extracted what they called the 'key concepts' from the qualitative studies they found about patients' experiences of diabetes and diabetes care. However, finding the key concepts in 'qualitative' research is not always straightforward either. As Sandelowski and Barroso [ 52 ] discovered, identifying the findings in qualitative research can be complicated by varied reporting styles or the misrepresentation of data as findings (as for example when data are used to 'let participants speak for themselves'). Sandelowski and Barroso [ 53 ] have argued that the findings of qualitative (and, indeed, all empirical) research are distinct from the data upon which they are based, the methods used to derive them, externally sourced data, and researchers' conclusions and implications.

In our example review, while it was relatively easy to identify 'data' in the studies – usually in the form of quotations from the children themselves – it was often difficult to identify key concepts or succinct summaries of findings, especially for studies that had undertaken relatively simple analyses and had not gone much further than describing and summarising what the children had said. To resolve this problem we took study findings to be all of the text labelled as 'results' or 'findings' in study reports – though we also found 'findings' in the abstracts which were not always reported in the same way in the text. Study reports ranged in size from a few pages to full final project reports. We entered all the results of the studies verbatim into QSR's NVivo software for qualitative data analysis. Where we had the documents in electronic form this process was straightforward even for large amounts of text. When electronic versions were not available, the results sections were either re-typed or scanned in using a flat-bed or pen scanner. (We have since adapted our own reviewing system, 'EPPI-Reviewer' [ 54 ], to handle this type of synthesis and the screenshots below show this software.)

Detailed methods for thematic synthesis

The synthesis took the form of three stages which overlapped to some degree: the free line-by-line coding of the findings of primary studies; the organisation of these 'free codes' into related areas to construct 'descriptive' themes; and the development of 'analytical' themes.

Stages one and two: coding text and developing descriptive themes

In our children and healthy eating review, we originally planned to extract and synthesise study findings according to our review questions regarding the barriers to, and facilitators of, healthy eating amongst children. It soon became apparent, however, that few study findings addressed these questions directly and it appeared that we were in danger of ending up with an empty synthesis. We were also concerned about imposing the a priori framework implied by our review questions onto study findings without allowing for the possibility that a different or modified framework might be a better fit. We therefore temporarily put our review questions to one side and started from the study findings themselves to conduct a thematic analysis.

There were eight relevant qualitative studies examining children's views of healthy eating. We entered the verbatim findings of these studies into our database. Three reviewers then independently coded each line of text according to its meaning and content. Figure 1 illustrates this line-by-line coding using our specialist reviewing software, EPPI-Reviewer, which includes a component designed to support thematic synthesis. The text which was taken from the report of the primary study is on the left and codes were created inductively to capture the meaning and content of each sentence. Codes could be structured, either in a tree form (as shown in the figure) or as 'free' codes – without a hierarchical structure.

Figure 1. Line-by-line coding in EPPI-Reviewer.

The use of line-by-line coding enabled us to undertake what has been described as one of the key tasks in the synthesis of qualitative research: the translation of concepts from one study to another [ 32 , 55 ]. However, this process may not be regarded as a simple one of translation. As we coded each new study we added to our 'bank' of codes and developed new ones when necessary. As well as translating concepts between studies, we had already begun the process of synthesis (for another account of this process, see Doyle [[ 39 ], p331]). Every sentence had at least one code applied, and most were categorised using several codes (e.g. 'children prefer fruit to vegetables' or 'why eat healthily?'). Before completing this stage of the synthesis, we also examined all the text to which a given code had been applied, to check consistency of interpretation and to see whether additional levels of coding were needed. (In grounded theory this is termed 'axial' coding; see Fisher [ 55 ] for further discussion of the application of axial coding in research synthesis.) This process created a total of 36 initial codes. For example, some of the text we coded as "bad food = nice, good food = awful" from one study [ 56 ] was:

'All the things that are bad for you are nice and all the things that are good for you are awful.' (Boys, year 6) [[ 56 ], p74]

'All adverts for healthy stuff go on about healthy things. The adverts for unhealthy things tell you how nice they taste.' [[ 56 ], p75]

Some children reported throwing away foods they knew had been put in because they were 'good for you' and only ate the crisps and chocolate . [[ 56 ], p75]
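The bookkeeping this coding stage requires can be sketched as follows. This is a minimal illustration, not the authors' EPPI-Reviewer implementation: every sentence receives at least one code, unseen codes are added to a shared bank on demand, and all text carrying a given code can be retrieved to check consistency of interpretation.

```python
# Minimal sketch (not EPPI-Reviewer) of inductive line-by-line coding:
# sentences are tagged with one or more codes, and the code bank grows
# as each new study is coded.
from collections import defaultdict

class CodeBank:
    def __init__(self):
        self.segments = defaultdict(list)  # code -> [(study_id, sentence), ...]

    def apply(self, study_id, sentence, codes):
        if not codes:
            raise ValueError("every sentence must receive at least one code")
        for code in codes:  # unseen codes are created implicitly
            self.segments[code].append((study_id, sentence))

    @property
    def codes(self):
        return set(self.segments)

    def text_for(self, code):
        """All sentences given this code, across studies, for consistency checks."""
        return [sentence for _, sentence in self.segments[code]]

# Example using a quotation from the review (the study ID is invented):
bank = CodeBank()
bank.apply("study56",
           "All the things that are bad for you are nice and all the "
           "things that are good for you are awful.",
           ["bad food = nice, good food = awful"])
```

Retrieving `bank.text_for(...)` for each code corresponds to the consistency check described above, where all text under one code is re-read together.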

Reviewers looked for similarities and differences between the codes in order to start grouping them into a hierarchical tree structure. New codes were created to capture the meaning of groups of initial codes. This process resulted in a tree structure with several layers, organising a total of 12 descriptive themes (Figure 2 ). For example, the first layer divided the 12 themes into whether they were concerned with children's understandings of healthy eating or influences on children's food choice. The above example, about children's preferences for food, was placed in both areas, since the findings related both to children's reactions to the foods they were given, and to how they behaved when given the choice over what foods they might eat. A draft summary of the findings across the studies, organised by the 12 descriptive themes, was then written by one of the review authors. Two other review authors commented on this draft and a final version was agreed.

Figure 2. Relationships between descriptive themes.
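The grouping described above can be represented as a small nested structure. The fragment below is an invented, simplified illustration: top-level areas contain descriptive themes, which in turn group the initial free codes, and the same code may sit under more than one area, as with the food-preference example.

```python
# Invented, simplified fragment of the descriptive-theme tree: area ->
# descriptive theme -> initial free codes. Theme names loosely echo the
# text; they are not the review's full set of 12 themes.
theme_tree = {
    "understandings of healthy eating": {
        "meanings of food": ["bad food = nice, good food = awful"],
    },
    "influences on food choice": {
        "food preferences": [
            "bad food = nice, good food = awful",
            "children prefer fruit to vegetables",
        ],
        "roles and responsibilities": ["why eat healthily?"],
    },
}

def areas_containing(code):
    """Return every top-level area whose themes include the given code
    (a code may legitimately appear under more than one area)."""
    return sorted(
        area
        for area, themes in theme_tree.items()
        if any(code in codes for codes in themes.values())
    )
```

Allowing a code to appear under multiple areas mirrors the decision described above to place the food-preference codes in both branches of the tree.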

Stage three: generating analytical themes

Up until this point, we had produced a synthesis which kept very close to the original findings of the included studies. The findings of each study had been combined into a whole via a listing of themes which described children's perspectives on healthy eating. However, we did not yet have a synthesis product that addressed directly the concerns of our review – regarding how to promote healthy eating, in particular fruit and vegetable intake, amongst children. Neither had we 'gone beyond' the findings of the primary studies and generated additional concepts, understandings or hypotheses. As noted earlier, the idea or step of 'going beyond' the content of the original studies has been identified by some as the defining characteristic of synthesis [ 32 , 14 ].

This stage of a qualitative synthesis is the most difficult to describe and is, potentially, the most controversial, since it is dependent on the judgement and insights of the reviewers. The equivalent stage in meta-ethnography is the development of 'third order interpretations' which go beyond the content of original studies [ 32 , 11 ]. In our example, the step of 'going beyond' the content of the original studies was achieved by using the descriptive themes that emerged from our inductive analysis of study findings to answer the review questions we had temporarily put to one side. Reviewers inferred barriers and facilitators from the views children were expressing about healthy eating or food in general, captured by the descriptive themes, and then considered the implications of children's views for intervention development. Each reviewer first did this independently and then as a group. Through this discussion more abstract or analytical themes began to emerge. The barriers and facilitators and implications for intervention development were examined again in light of these themes and changes made as necessary. This cyclical process was repeated until the new themes were sufficiently abstract to describe and/or explain all of our initial descriptive themes, our inferred barriers and facilitators and implications for intervention development.

For example, five of the 12 descriptive themes concerned the influences on children's choice of foods (food preferences, perceptions of health benefits, knowledge behaviour gap, roles and responsibilities, non-influencing factors). From these, reviewers inferred several barriers and implications for intervention development. Children readily identified taste as the major concern for them when selecting food, and health as either a secondary factor or, in some cases, a reason for rejecting food. Children also felt that buying healthy food was not a legitimate use of their pocket money, which they would use to buy sweets that could be enjoyed with friends. These perspectives indicated to us that branding fruit and vegetables as 'tasty' rather than 'healthy' might be more effective in increasing consumption. As one child noted astutely, 'All adverts for healthy stuff go on about healthy things. The adverts for unhealthy things tell you how nice they taste.' [[ 56 ], p75]. We captured this line of argument in the analytical theme entitled 'Children do not see it as their role to be interested in health'. Altogether, this process resulted in the generation of six analytical themes associated with ten recommendations for interventions.

Six main issues emerged from the studies of children's views: (1) children do not see it as their role to be interested in health; (2) children do not see messages about future health as personally relevant or credible; (3) fruit, vegetables and confectionery have very different meanings for children; (4) children actively seek ways to exercise their own choices with regard to food; (5) children value eating as a social occasion; and (6) children see the contradiction between what is promoted in theory and what adults provide in practice. The review found that most interventions were based in school (though frequently with parental involvement) and often combined learning about the health benefits of fruit and vegetables with 'hands-on' experience in the form of food preparation and taste-testing. Interventions targeted at people with particular risk factors worked better than others, and multi-component interventions that combined the promotion of physical activity with healthy eating did not work as well as those that only concentrated on healthy eating. The studies of children's views suggested that fruit and vegetables should be treated in different ways in interventions, and that messages should not focus on health warnings. Interventions that were in line with these suggestions tended to be more effective than those which were not.

Context and rigour in thematic synthesis

The process of translation, through the development of descriptive and analytical themes, can be carried out in a rigorous way that facilitates transparency of reporting. Since we aim to produce a synthesis that both generates 'abstract and formal theories' that are nevertheless 'empirically faithful to the cases from which they were developed' [[ 53 ], p1371], we see the explicit recording of the development of themes as being central to the method. The use of software as described can facilitate this by allowing reviewers to examine the contribution made to their findings by individual studies, groups of studies, or sub-populations within studies.
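The kind of audit query described above can be sketched with a flat record structure. All study identifiers and subgroup labels below are invented; the point is only to show how records linking studies and subgroups to themes support asking which studies contribute to a theme, and whether any theme rests on a single subgroup.

```python
# Sketch of theme-contribution queries; study IDs, subgroup labels, and
# theme names are invented examples, not the review's data.
contributions = [
    ("study1", "girls", "food preferences"),
    ("study1", "boys", "food preferences"),
    ("study2", "boys", "roles and responsibilities"),
    ("study2", "girls", "roles and responsibilities"),
    ("study3", "all", "food preferences"),
]

def contributing_studies(theme):
    """Which studies contribute coded text to this theme?"""
    return sorted({study for study, _, t in contributions if t == theme})

def themes_specific_to(group):
    """Themes supported only by one subgroup. In this invented data set
    there are none for 'boys', echoing the review's report that no theme
    belonged to a specific group."""
    by_theme = {}
    for _, grp, theme in contributions:
        by_theme.setdefault(theme, set()).add(grp)
    return sorted(t for t, grps in by_theme.items() if grps == {group})
```

Queries like these are one way to make the contribution of individual studies, groups of studies, or sub-populations explicit and auditable.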

Some may argue against the synthesis of qualitative research on the grounds that the findings of individual studies are de-contextualised and that concepts identified in one setting are not applicable to others [ 32 ]. However, the act of synthesis could be viewed as similar to the role of a research user when reading a piece of qualitative research and deciding how useful it is to their own situation. In the case of synthesis, reviewers translate themes and concepts from one situation to another and must continually check that each transfer is valid, asking whether there are any reasons that understandings gained in one context might not be transferred to another. We attempted to preserve context by providing structured summaries of each study detailing aims, methods, methodological quality, and setting and sample. This meant that readers of our review were able to judge for themselves whether or not the contexts of the studies the review contained were similar to their own. In the synthesis we also checked whether the emerging findings really were transferable across different study contexts. For example, we tried throughout the synthesis to distinguish between participants (e.g. boys and girls) where the primary research had made an appropriate distinction. We then looked to see whether some of our synthesis findings could be attributed to a particular group of children or setting. In the event, we did not find any themes that belonged to a specific group, but another outcome of this process was the realisation that the contextual information given in the reports of studies was very restricted indeed. It was therefore difficult to make the best use of context in our synthesis.

In checking that we were not translating concepts into situations where they did not belong, we were following a principle that others have followed when using synthesis methods to build grounded formal theory: that of grounding a text in the context in which it was constructed. As Margaret Kearney has noted "the conditions under which data were collected, analysis was done, findings were found, and products were written for each contributing report should be taken into consideration in developing a more generalized and abstract model" [[ 14 ], p1353]. Britten et al . [ 32 ] suggest that it may be important to make a deliberate attempt to include studies conducted across diverse settings to achieve the higher level of abstraction that is aimed for in a meta-ethnography.

Study quality and sensitivity analyses

We assessed the 'quality' of our studies with regard to the degree to which they represented the views of their participants. In doing this, we were locating the concept of 'quality' within the context of the purpose of our review – children's views – and not necessarily the context of the primary studies themselves. Our 'hierarchy of evidence', therefore, did not prioritise the research design of studies but emphasised the ability of the studies to answer our review question. A traditional systematic review of controlled trials would contain a quality assessment stage, the purpose of which is to exclude studies that do not provide a reliable answer to the review question. However, given that there were no accepted – or empirically tested – methods for excluding qualitative studies from syntheses on the basis of their quality [ 57 , 12 , 58 ], we included all studies regardless of their quality.

Nevertheless, our studies did differ according to the quality criteria they were assessed against and it was important that we considered this in some way. In systematic reviews of trials, 'sensitivity analyses' – analyses which test the effect on the synthesis of including and excluding findings from studies of differing quality – are often carried out. Dixon-Woods et al . [ 12 ] suggest that assessing the feasibility and worth of conducting sensitivity analyses within syntheses of qualitative research should be an important focus of synthesis methods work. After our thematic synthesis was complete, we examined the relative contributions of studies to our final analytic themes and recommendations for interventions. We found that the poorer quality studies contributed comparatively little to the synthesis and did not contain many unique themes; the better studies, on the other hand, appeared to have more developed analyses and contributed most to the synthesis.
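A basic version of the sensitivity analysis described above can be expressed as code. This is a sketch under invented data: the study labels, quality ratings, and themes are placeholders, and the 'unique contribution' measure shown is just one simple way to operationalise the comparison.

```python
# Sketch: after the synthesis is complete, compare the themes contributed
# by higher- and lower-quality studies. All data below are invented.
themes_by_study = {
    "Study A": {"choice", "social eating", "health not my role"},
    "Study B": {"choice", "social eating"},
    "Study C": {"social eating"},   # rated lower quality
    "Study D": {"choice"},          # rated lower quality
}
quality = {"Study A": "high", "Study B": "high",
           "Study C": "low", "Study D": "low"}

def unique_contribution(study, themes_by_study):
    """Themes that would be lost from the synthesis if this study were excluded."""
    others = set().union(*(t for s, t in themes_by_study.items() if s != study))
    return themes_by_study[study] - others

for study in themes_by_study:
    print(study, quality[study], sorted(unique_contribution(study, themes_by_study)))
```

In this toy example only the higher-quality Study A contributes a unique theme, mirroring the pattern reported above, where poorer quality studies contained few unique themes.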

This paper has discussed the rationale for reviewing and synthesising qualitative research in a systematic way and has outlined one specific approach for doing this: thematic synthesis. While it is not the only method which might be used – and we have discussed some of the other options available – we present it here as a tested technique that has worked in the systematic reviews in which it has been employed.

We have observed that one of the key tasks in the synthesis of qualitative research is the translation of concepts between studies. While the activity of translating concepts is usually undertaken in the few syntheses of qualitative research that exist, there are few examples that specify the detail of how this translation is actually carried out. The example above shows how we achieved the translation of concepts across studies through the use of line-by-line coding, the organisation of these codes into descriptive themes, and the generation of analytical themes through the application of a higher level theoretical framework. This paper therefore also demonstrates how the methods and process of a thematic synthesis can be written up in a transparent way.

This paper goes some way to addressing concerns regarding the use of thematic analysis in research synthesis raised by Dixon-Woods and colleagues who argue that the approach can lack transparency due to a failure to distinguish between 'data-driven' or 'theory-driven' approaches. Moreover they suggest that, "if thematic analysis is limited to summarising themes reported in primary studies, it offers little by way of theoretical structure within which to develop higher order thematic categories..." [[ 35 ], p47]. Part of the problem, they observe, is that the precise methods of thematic synthesis are unclear. Our approach contains a clear separation between the 'data-driven' descriptive themes and the 'theory-driven' analytical themes and demonstrates how the review questions provided a theoretical structure within which it became possible to develop higher order thematic categories.

The theme of 'going beyond' the content of the primary studies was discussed earlier. Citing Strike and Posner [ 59 ], Campbell et al. [[ 11 ], p672] also suggest that synthesis "involves some degree of conceptual innovation, or employment of concepts not found in the characterisation of the parts and a means of creating the whole" . This was certainly true of the example given in this paper. We used a series of questions, derived from the main topic of our review, to focus an examination of our descriptive themes, and our recommendations for interventions were not contained in the findings of the primary studies: they were new propositions generated by the reviewers in the light of the synthesis. The method also demonstrates that it is possible to synthesise without conceptual innovation: the initial synthesis, involving the translation of concepts between studies, was necessary in order for conceptual innovation to begin. One could argue that the conceptual innovation, in this case, was only necessary because the primary studies did not address our review question directly. In situations in which the primary studies are concerned directly with the review question, it may not be necessary to go beyond the contents of the original studies in order to produce a satisfactory synthesis (see, for example, Marston and King [ 60 ]). Conceptually, our analytical themes are similar to the ultimate product of meta-ethnographies – third order interpretations [ 11 ] – since both are explicit mechanisms for going beyond the content of the primary studies and presenting this in a transparent way. The main difference between them lies in their purposes. Third order interpretations bring together the implications of translating studies into one another in their own terms, whereas analytical themes are the result of interrogating a descriptive synthesis by placing it within an external theoretical framework (our review question and sub-questions).
It may be, therefore, that analytical themes are more appropriate when a specific review question is being addressed (as often occurs when informing policy and practice), and third order interpretations should be used when a body of literature is being explored in and of itself, with broader, or emergent, review questions.

This paper is a contribution to the current developmental work taking place in understanding how best to bring together the findings of qualitative research to inform policy and practice. It is by no means the only method on offer but, by drawing on methods and principles from qualitative primary research, it benefits from the years of methodological development that underpins the research it seeks to synthesise.

Chalmers I: Trying to do more good than harm in policy and practice: the role of rigorous, transparent and up-to-date evaluations. Ann Am Acad Pol Soc Sci. 2003, 589: 22-40. 10.1177/0002716203254762.

Oakley A: Social science and evidence-based everything: the case of education. Educ Rev. 2002, 54: 277-286. 10.1080/0013191022000016329.

Cooper H, Hedges L: The Handbook of Research Synthesis. 1994, New York: Russell Sage Foundation

EPPI-Centre: EPPI-Centre Methods for Conducting Systematic Reviews. 2006, London: EPPI-Centre, Social Science Research Unit, Institute of Education, University of London, [ http://eppi.ioe.ac.uk/cms/Default.aspx?tabid=89 ]

Higgins J, Green S, (Eds): Cochrane Handbook for Systematic Reviews of Interventions 4.2.6. 2006, Updated September 2006. Accessed 24th January 2007, [ http://www.cochrane.org/resources/handbook/ ]

Petticrew M, Roberts H: Systematic Reviews in the Social Sciences: A practical guide. 2006, Oxford: Blackwell Publishing

Chalmers I, Hedges L, Cooper H: A brief history of research synthesis. Eval Health Prof. 2002, 25: 12-37. 10.1177/0163278702025001003.

Juni P, Altman D, Egger M: Assessing the quality of controlled clinical trials. BMJ. 2001, 323: 42-46. 10.1136/bmj.323.7303.42.

Mulrow C: Systematic reviews: rationale for systematic reviews. BMJ. 1994, 309: 597-599.

White H: Scientific communication and literature retrieval. The Handbook of Research Synthesis. Edited by: Cooper H, Hedges L. 1994, New York: Russell Sage Foundation

Campbell R, Pound P, Pope C, Britten N, Pill R, Morgan M, Donovan J: Evaluating meta-ethnography: a synthesis of qualitative research on lay experiences of diabetes and diabetes care. Soc Sci Med. 2003, 56: 671-684. 10.1016/S0277-9536(02)00064-3.

Dixon-Woods M, Bonas S, Booth A, Jones DR, Miller T, Sutton AJ, Shaw RL, Smith JA, Young B: How can systematic reviews incorporate qualitative research? A critical perspective. Qual Res. 2006, 6: 27-44. 10.1177/1468794106058867.

Sandelowski M, Barroso J: Handbook for Synthesising Qualitative Research. 2007, New York: Springer

Thorne S, Jensen L, Kearney MH, Noblit G, Sandelowski M: Qualitative meta-synthesis: reflections on methodological orientation and ideological agenda. Qual Health Res. 2004, 14: 1342-1365. 10.1177/1049732304269888.

Harden A, Garcia J, Oliver S, Rees R, Shepherd J, Brunton G, Oakley A: Applying systematic review methods to studies of people's views: an example from public health. J Epidemiol Community Health. 2004, 58: 794-800. 10.1136/jech.2003.014829.

Harden A, Brunton G, Fletcher A, Oakley A: Young People, Pregnancy and Social Exclusion: A systematic synthesis of research evidence to identify effective, appropriate and promising approaches for prevention and support. 2006, London: EPPI-Centre, Social Science Research Unit, Institute of Education, University of London, [ http://eppi.ioe.ac.uk/cms/Default.aspx?tabid=674 ]

Thomas J, Sutcliffe K, Harden A, Oakley A, Oliver S, Rees R, Brunton G, Kavanagh J: Children and Healthy Eating: A systematic review of barriers and facilitators. 2003, London: EPPI-Centre, Social Science Research Unit, Institute of Education, University of London, accessed 4th July 2008, [ http://eppi.ioe.ac.uk/cms/Default.aspx?tabid=246 ]

Thomas J, Kavanagh J, Tucker H, Burchett H, Tripney J, Oakley A: Accidental Injury, Risk-Taking Behaviour and the Social Circumstances in which Young People Live: A systematic review. 2007, London: EPPI-Centre, Social Science Research Unit, Institute of Education, University of London, [ http://eppi.ioe.ac.uk/cms/Default.aspx?tabid=1910 ]

Bryman A: Quantity and Quality in Social Research. 1998, London: Unwin

Hammersley M: What's Wrong with Ethnography?. 1992, London: Routledge

Harden A, Thomas J: Methodological issues in combining diverse study types in systematic reviews. Int J Soc Res Meth. 2005, 8: 257-271. 10.1080/13645570500155078.

Oakley A: Experiments in Knowing: Gender and methods in the social sciences. 2000, Cambridge: Polity Press

Harden A, Oakley A, Oliver S: Peer-delivered health promotion for young people: a systematic review of different study designs. Health Educ J. 2001, 60: 339-353. 10.1177/001789690106000406.

Harden A, Rees R, Shepherd J, Brunton G, Oliver S, Oakley A: Young People and Mental Health: A systematic review of barriers and facilitators. 2001, London: EPPI-Centre, Social Science Research Unit, Institute of Education, University of London, [ http://eppi.ioe.ac.uk/cms/Default.aspx?tabid=256 ]

Rees R, Harden A, Shepherd J, Brunton G, Oliver S, Oakley A: Young People and Physical Activity: A systematic review of barriers and facilitators. 2001, London: EPPI-Centre, Social Science Research Unit, Institute of Education, University of London, [ http://eppi.ioe.ac.uk/cms/Default.aspx?tabid=260 ]

Shepherd J, Harden A, Rees R, Brunton G, Oliver S, Oakley A: Young People and Healthy Eating: A systematic review of barriers and facilitators. 2001, London: EPPI-Centre, Social Science Research Unit, Institute of Education, University of London, [ http://eppi.ioe.ac.uk/cms/Default.aspx?tabid=258 ]

Thomas J, Harden A, Oakley A, Oliver S, Sutcliffe K, Rees R, Brunton G, Kavanagh J: Integrating qualitative research with trials in systematic reviews: an example from public health. BMJ. 2004, 328: 1010-1012. 10.1136/bmj.328.7446.1010.

Davies P: What is evidence-based education?. Br J Educ Stud. 1999, 47: 108-121. 10.1111/1467-8527.00106.

Newman M, Thompson C, Roberts AP: Helping practitioners understand the contribution of qualitative research to evidence-based practice. Evid Based Nurs. 2006, 9: 4-7. 10.1136/ebn.9.1.4.

Popay J: Moving Beyond Effectiveness in Evidence Synthesis. 2006, London: National Institute for Health and Clinical Excellence

Noblit GW, Hare RD: Meta-Ethnography: Synthesizing qualitative studies. 1988, Newbury Park: Sage

Britten N, Campbell R, Pope C, Donovan J, Morgan M, Pill R: Using meta-ethnography to synthesise qualitative research: a worked example. J Health Serv Res Policy. 2002, 7: 209-215. 10.1258/135581902320432732.

Paterson B, Thorne S, Canam C, Jillings C: Meta-Study of Qualitative Health Research. 2001, Thousand Oaks, California: Sage

Dixon-Woods M, Cavers D, Agarwal S, Annandale E, Arthur A, Harvey J, Katbamna S, Olsen R, Smith L, Riley R, Sutton AJ: Conducting a critical interpretative synthesis of the literature on access to healthcare by vulnerable groups. BMC Med Res Methodol. 2006, 6: 35-10.1186/1471-2288-6-35.

Dixon-Woods M, Agarwal S, Jones D, Young B, Sutton A: Synthesising qualitative and quantitative evidence: a review of possible methods. J Health Serv Res Policy. 2005, 10: 45-53. 10.1258/1355819052801804.

Boyatzis RE: Transforming Qualitative Information. 1998, Sage: Cleveland

Braun V, Clarke V: Using thematic analysis in psychology. Qual Res Psychol. 2006, 3: 77-101. 10.1191/1478088706qp063oa. [ http://science.uwe.ac.uk/psychology/drvictoriaclarke_files/thematicanalysis%20.pdf ]

Silverman D, Ed: Qualitative Research: Theory, method and practice. 1997, London: Sage

Doyle LH: Synthesis through meta-ethnography: paradoxes, enhancements, and possibilities. Qual Res. 2003, 3: 321-344. 10.1177/1468794103033003.

Barroso J, Gollop C, Sandelowski M, Meynell J, Pearce PF, Collins LJ: The challenges of searching for and retrieving qualitative studies. Western J Nurs Res. 2003, 25: 153-178. 10.1177/0193945902250034.

Walters LA, Wilczynski NL, Haynes RB, Hedges Team: Developing optimal search strategies for retrieving clinically relevant qualitative studies in EMBASE. Qual Health Res. 2006, 16: 162-8. 10.1177/1049732305284027.

Wong SSL, Wilczynski NL, Haynes RB: Developing optimal search strategies for detecting clinically relevant qualitative studies in Medline. Medinfo. 2004, 11: 311-314.

Murphy E, Dingwall R, Greatbatch D, Parker S, Watson P: Qualitative research methods in health technology assessment: a review of the literature. Health Technol Assess. 1998, 2 (16):

Seale C: Quality in qualitative research. Qual Inq. 1999, 5: 465-478.

Spencer L, Ritchie J, Lewis J, Dillon L: Quality in Qualitative Evaluation: A framework for assessing research evidence. 2003, London: Cabinet Office

Boulton M, Fitzpatrick R, Swinburn C: Qualitative research in healthcare II: a structured review and evaluation of studies. J Eval Clin Pract. 1996, 2: 171-179. 10.1111/j.1365-2753.1996.tb00041.x.

Cobb A, Hagemaster J: Ten criteria for evaluating qualitative research proposals. J Nurs Educ. 1987, 26: 138-143.

Mays N, Pope C: Rigour and qualitative research. BMJ. 1995, 311: 109-12.

Medical Sociology Group: Criteria for the evaluation of qualitative research papers. Med Sociol News. 1996, 22: 68-71.

Alderson P: Listening to Children. 1995, London: Barnardo's

Egger M, Davey-Smith G, Altman D: Systematic Reviews in Health Care: Meta-analysis in context. 2001, London: BMJ Publishing

Sandelowski M, Barroso J: Finding the findings in qualitative studies. J Nurs Scholarsh. 2002, 34: 213-219. 10.1111/j.1547-5069.2002.00213.x.

Sandelowski M: Using qualitative research. Qual Health Res. 2004, 14: 1366-1386. 10.1177/1049732304269672.

Thomas J, Brunton J: EPPI-Reviewer 3.0: Analysis and management of data for research synthesis. EPPI-Centre software. 2006, London: EPPI-Centre, Social Science Research Unit, Institute of Education

Fisher M, Qureshi H, Hardyman W, Homewood J: Using Qualitative Research in Systematic Reviews: Older people's views of hospital discharge. 2006, London: Social Care Institute for Excellence

Dixey R, Sahota P, Atwal S, Turner A: Children talking about healthy eating: data from focus groups with 300 9–11-year-olds. Nutr Bull. 2001, 26: 71-79. 10.1046/j.1467-3010.2001.00078.x.

Daly A, Willis K, Small R, Green J, Welch N, Kealy M, Hughes E: Hierarchy of evidence for assessing qualitative health research. J Clin Epidemiol. 2007, 60: 43-49. 10.1016/j.jclinepi.2006.03.014.

Popay J: Moving beyond floccinaucinihilipilification: enhancing the utility of systematic reviews. J Clin Epidemiol. 2005, 58: 1079-80. 10.1016/j.jclinepi.2005.08.004.

Strike K, Posner G: Types of synthesis and their criteria. Knowledge Structure and Use: Implications for synthesis and interpretation. Edited by: Ward S, Reed L. 1983, Philadelphia: Temple University Press

Marston C, King E: Factors that shape young people's sexual behaviour: a systematic review. The Lancet. 2006, 368: 1581-86. 10.1016/S0140-6736(06)69662-1.

Pre-publication history

The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1471-2288/8/45/prepub

Download references

Acknowledgements

The authors would like to thank Elaine Barnett-Page for her assistance in producing the draft paper, and David Gough, Ann Oakley and Sandy Oliver for their helpful comments. The review used as an example in this paper was funded by the Department of Health (England). The methodological development was supported by the Department of Health (England) and the ESRC through the Methods for Research Synthesis Node of the National Centre for Research Methods. In addition, Angela Harden held a senior research fellowship funded by the Department of Health (England) December 2003 – November 2007. The views expressed in this paper are those of the authors and are not necessarily those of the funding bodies.

Author information

Authors and Affiliations

EPPI-Centre, Social Science Research Unit, Institute of Education, University of London, UK

James Thomas & Angela Harden

Corresponding author

Correspondence to James Thomas .

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors' contributions

Both authors contributed equally to the paper and read and approved the final manuscript.

James Thomas and Angela Harden contributed equally to this work.

Authors’ original submitted files for images

Below are the links to the authors’ original submitted files for images.

Authors’ original file for figure 1

Authors’ original file for figure 2

Rights and permissions

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License ( http://creativecommons.org/licenses/by/2.0 ), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Reprints and permissions

About this article

Cite this article

Thomas, J., Harden, A. Methods for the thematic synthesis of qualitative research in systematic reviews. BMC Med Res Methodol 8 , 45 (2008). https://doi.org/10.1186/1471-2288-8-45

Download citation

Received : 17 April 2008

Accepted : 10 July 2008

Published : 10 July 2008

DOI : https://doi.org/10.1186/1471-2288-8-45


  • Qualitative Research
  • Primary Study
  • Analytical Theme
  • Healthy Eating
  • Review Question

BMC Medical Research Methodology

ISSN: 1471-2288

Int J Prev Med

How to Write a Systematic Review: A Narrative Review

Ali Hasanpour Dehkordi

Social Determinants of Health Research Center, Shahrekord University of Medical Sciences, Shahrekord, Iran

Elaheh Mazaheri

1 Health Information Technology Research Center, Student Research Committee, Department of Medical Library and Information Sciences, School of Management and Medical Information Sciences, Isfahan University of Medical Sciences, Isfahan, Iran

Hanan A. Ibrahim

2 Department of International Relations, College of Law, Bayan University, Erbil, Kurdistan, Iraq

Sahar Dalvand

3 MSc in Biostatistics, Health Promotion Research Center, Iran University of Medical Sciences, Tehran, Iran

Reza Ghanei Gheshlagh

4 Spiritual Health Research Center, Research Institute for Health Development, Kurdistan University of Medical Sciences, Sanandaj, Iran

In recent years, the number of published systematic reviews, both worldwide and in Iran, has been increasing. These studies are an important resource for answering evidence-based clinical questions and assist health policy-makers and students who want to identify evidence gaps in published research. Systematic review studies, with or without meta-analysis, synthesize all available evidence from studies focused on the same research question. In this study, the steps of a systematic review, such as designing and identifying the research question, searching for qualified published studies, extracting and synthesizing the information that pertains to the research question, and interpreting the results, are presented in detail. This will be helpful to all interested researchers.

A systematic review, as its name suggests, is a systematic way of collecting, evaluating, integrating, and presenting findings from several studies on a specific question or topic.[ 1 ] A systematic review is research that, by identifying and combining evidence, is tailored to and answers the research question, based on an assessment of all relevant studies.[ 2 , 3 ] Identifying, assessing, and interpreting available research; identifying effective and ineffective health-care interventions; providing integrated documentation to support decision-making; and identifying the gaps between studies are among the most important reasons for conducting systematic review studies.[ 4 ]

In review studies, the latest scientific information about a particular topic is appraised critically. In such studies, the terms review, systematic review, and meta-analysis are often used interchangeably. A systematic review is done in one of two ways, quantitative (meta-analysis) or qualitative. In a meta-analysis, the results of two or more studies evaluating, say, health interventions are combined to measure the effect of treatment, while in the qualitative method, the findings of other studies are combined without using statistical methods.[ 5 ]

Since 1999, various guidelines, including QUOROM, MOOSE, STROBE, CONSORT, and QUADAS, have been introduced for reporting meta-analyses, but recently the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) statement has gained widespread popularity.[ 6 , 7 , 8 , 9 ] The systematic review process based on the PRISMA statement comprises the following steps: formulating the research question, defining the eligibility criteria, identifying all relevant studies, extracting and synthesizing the data, and deducing and presenting the results (the answers to the research question).[ 2 ]

Systematic Review Protocol

Systematic reviews start with a protocol. The protocol is the researcher's road map, outlining the goals, methodology, and outcomes of the research. Many journals advise writers to use the PRISMA statement to write the protocol.[ 10 ] The PRISMA checklist includes 27 items related to the content of a systematic review and meta-analysis, covering the abstract, methods, results, discussion, and funding sources.[ 11 ] PRISMA helps writers improve the reporting of their systematic review and meta-analysis. Reviewers and editors of medical journals acknowledge that while PRISMA may not be used as a tool to assess methodological quality, it does help them to publish a better study article [ Figure 1 ].[ 12 ]

Figure 1. Screening process and article selection according to the PRISMA guidelines

The main step in designing the protocol is to define the main objectives of the study and provide some background information. Before starting a systematic review, it is important to ensure that your study is not a duplicate; it is therefore necessary to check PROSPERO and the Cochrane Database of Systematic Reviews for existing work. It is often useful to search four sources: related systematic reviews that have already been published (PubMed, Web of Science, Scopus, Cochrane), published systematic review protocols (PubMed, Web of Science, Scopus, Cochrane), systematic review protocols that have been registered but not yet published (PROSPERO, Cochrane), and, finally, related published articles (PubMed, Web of Science, Scopus, Cochrane). The goal is to reduce duplicate research and keep systematic reviews up to date.[ 13 ]

Research questions

Writing a research question is the first step in a systematic review and summarizes the main goal of the study.[ 14 ] The research question determines which types of studies should be included in the analysis (quantitative, qualitative, mixed methods, overviews of reviews, or other studies). Sometimes a research question may be broken down into several more detailed questions.[ 15 ] A vague question (such as 'Is walking helpful?') prevents the researcher from focusing well on the collected studies or analyzing them appropriately.[ 16 ] On the other hand, if the research question is rigid and restrictive (e.g., 'Is walking for 43 min 3 times a week better than walking for 38 min 4 times a week?'), there may not be enough studies in the area to answer it, and the generalizability of the findings to other populations will be reduced.[ 16 , 17 ] A good systematic review question should include the PICOS components: population (P), intervention (I), comparison (C), outcome (O), and setting (S).[ 18 ] Depending on the purpose of the study, the control group in clinical trials or pre-post studies can take the place of C.[ 19 ]
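The PICOS components can be held in a simple structured form, which makes it easy to check that no component has been forgotten. The sketch below uses an invented walking-intervention question; the class and field names are illustrative, not taken from the article.

```python
from dataclasses import dataclass

@dataclass
class PICOSQuestion:
    """A PICOS-style review question: population, intervention,
    comparison, outcome, and setting."""
    population: str
    intervention: str
    comparison: str
    outcome: str
    setting: str

    def summary(self) -> str:
        # Render the components as a single answerable question.
        return (f"In {self.population}, does {self.intervention} "
                f"compared with {self.comparison} improve {self.outcome} "
                f"in {self.setting}?")

# Invented example question:
q = PICOSQuestion(
    population="adults with hypertension",
    intervention="a supervised walking programme",
    comparison="usual care",
    outcome="systolic blood pressure",
    setting="primary care",
)
print(q.summary())
```

A structured question like this also maps directly onto eligibility criteria: each field suggests an inclusion/exclusion rule for screening.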

Search and identify eligible texts

After clarifying the research question and before searching the databases, it is necessary to specify the search methods, article screening procedure, study eligibility checks, checking of the references of eligible studies, data extraction, and data analysis. This helps researchers ensure that potential biases in the selection of studies are minimized.[ 14 , 17 ] The protocol should also specify details such as which published and unpublished literature will be searched, how and through which mechanisms the searches will be run, and what the inclusion and exclusion criteria are.[ 4 ] First, all studies are searched and collected according to predefined keywords; then the title, abstract, and full text are screened for relevance by the authors.[ 13 ] By screening articles based on their titles, researchers can quickly decide whether to retain or remove an article. If more information is needed, the abstracts of the articles are also reviewed. In the next step, the full text of the articles is reviewed to identify the relevant articles, and the reasons for removing excluded articles are reported.[ 20 ] Finally, it is recommended that the process of searching, selecting, and screening articles be reported as a flowchart.[ 21 ] As the volume of published research grows, finding up-to-date and relevant information becomes more difficult.[ 22 ]
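The screening stages just described (deduplication, title/abstract screening, full-text review) reduce to simple arithmetic when reported as a PRISMA-style flowchart. The counts below are invented for illustration:

```python
# Sketch of PRISMA-style flow accounting; all counts are invented.
records_identified = 1200           # from all database searches combined
duplicates_removed = 300
excluded_title_abstract = 750       # removed at title/abstract screening
full_text_exclusions = {"wrong population": 60, "no relevant outcome": 40}

records_screened = records_identified - duplicates_removed
full_texts_assessed = records_screened - excluded_title_abstract
studies_included = full_texts_assessed - sum(full_text_exclusions.values())

print(f"records screened:    {records_screened}")     # 900
print(f"full texts assessed: {full_texts_assessed}")  # 150
print(f"studies included:    {studies_included}")     # 50
```

Keeping these counts (and the reasons for full-text exclusion) as the review proceeds makes the final flowchart a matter of filling in numbers rather than reconstructing them.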

Currently, there is no specific guideline as to which databases should be searched, which database is best, or how many should be searched; overall, it is advisable to search broadly. Because no single database covers all health topics, it is recommended to search several.[ 23 ] According to A MeaSurement Tool to Assess Systematic Reviews (AMSTAR), at least two databases should be searched for a systematic review or meta-analysis, although more comprehensive and accurate results can be obtained by increasing the number of databases searched.[ 24 ] The type of database to be searched depends on the systematic review question. For example, for a review of clinical trials, it is recommended that Cochrane, multi-regional clinical trials (mRCTs), and the International Clinical Trials Registry Platform be searched.[ 25 ]

For example, MEDLINE, a product of the United States National Library of Medicine, focuses on peer-reviewed articles in biomedicine and health, while Embase covers the broad field of pharmacology as well as conference abstracts. CINAHL is a great resource for nursing and health research, and PsycINFO is a great database for psychology, psychiatry, counseling, addiction, and behavioral problems. National and regional databases can also be used to find related articles.[26,27] In addition, searching conference proceedings and gray literature helps address the file-drawer problem (negative studies that may never be published).[26] If a systematic review covers articles from a particular country or region, the databases of that region or country should also be investigated; for example, Iranian researchers can use national databases such as the Scientific Information Database and MagIran. A comprehensive search that identifies the maximum number of existing studies minimizes selection bias. In the search process, the available databases should be used as much as possible, although many databases overlap.[17] Searching 12 databases (PubMed, Scopus, Web of Science, EMBASE, GHL, VHL, Cochrane, Google Scholar, ClinicalTrials.gov, mRCTs, POPLINE, and SIGLE) covers all articles published in the field of medicine and health.[25] Some have suggested using reference management software to more easily identify and remove duplicate articles retrieved from several different databases.[20] At least one full search strategy should be presented in the article.[21]
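The de-duplication step mentioned above is usually handled by reference management software, but its core logic can be sketched in a few lines. This is a minimal illustration, not the behavior of any specific tool; the record fields ("title", "doi") and the sample records are assumptions for the example.

```python
# A minimal sketch of de-duplicating records exported from several
# databases, keyed on DOI when present, otherwise on a normalized title.
# Field names ("title", "doi") are illustrative, not from any specific
# reference manager's export format.
import re

def normalize_title(title):
    """Lowercase and strip punctuation/whitespace so trivially different
    renderings of the same title compare equal."""
    return re.sub(r"[^a-z0-9]", "", title.lower())

def deduplicate(records):
    seen, unique = set(), []
    for rec in records:
        key = rec.get("doi") or normalize_title(rec["title"])
        if key not in seen:
            seen.add(key)
            unique.append(rec)
    return unique

records = [
    {"title": "CESM in breast cancer", "doi": "10.1/abc"},
    {"title": "CESM in Breast Cancer.", "doi": None},      # no DOI: kept for manual check
    {"title": "CESM in breast cancer", "doi": "10.1/abc"}, # exact duplicate: dropped
]
print(len(deduplicate(records)))
```

Note that the DOI-less variant survives the automatic pass; in practice such near-duplicates are flagged for manual review rather than silently merged.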

Quality assessment

The methodological quality assessment of articles is a key step in a systematic review that helps identify systematic errors (bias) in results and interpretations. In systematic review studies, unlike other review studies, quality assessment or risk-of-bias assessment is required. Several tools are currently available for reviewing the quality of articles, although the overall score from these tools may not provide sufficient information on the strengths and weaknesses of the studies.[28] At least two reviewers should independently evaluate the quality of the articles; in case of disagreement, a third author should examine the article, or the two reviewers should reach consensus through discussion. Some believe that quality assessment should be done in a blinded fashion, with the journal name, title, authors, and institutions removed.[29]

Several tools exist for quality assessment, such as Sacks' quality assessment (1988),[30] the overview quality assessment questionnaire (1991),[31] the Critical Appraisal Skills Programme (CASP),[32,34] AMSTAR (2007),[33] the National Institute for Health and Care Excellence checklists,[35] and the Joanna Briggs Institute System for the Unified Management, Assessment and Review of Information checklists.[30,36] It is worth mentioning, however, that there is no single tool for assessing the quality of all types of reviews; each is more applicable to some types than others. The STROBE tool is often used to check the quality of observational articles. It reviews the title and abstract (item 1), introduction (items 2 and 3), methods (items 4-12), findings (items 13-17), discussion (items 18-21), and funding (item 22). Eighteen items apply to all articles, while four items (6, 12, 14, and 15) apply only in certain situations.[9] The quality of interventional articles is often evaluated with the Jadad tool, which consists of three sections: randomization (2 points), blinding (2 points), and an account of all patients (1 point).[29]
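The Jadad scoring described above is simple enough to express as code. The helper below is a hypothetical sketch of the three domains (randomization 0-2, blinding 0-2, account of all patients 0-1), not the official instrument, which also includes deductions for inappropriate methods.

```python
def jadad_score(randomized, randomization_appropriate,
                double_blind, blinding_appropriate,
                withdrawals_described):
    """Hypothetical helper totalling the three Jadad domains:
    randomization (0-2), blinding (0-2), account of patients (0-1)."""
    score = 0
    if randomized:
        score += 1
        if randomization_appropriate:   # extra point for an adequate method
            score += 1
    if double_blind:
        score += 1
        if blinding_appropriate:        # extra point for an adequate method
            score += 1
    if withdrawals_described:           # withdrawals and dropouts reported
        score += 1
    return score

# A randomized, double-blind trial with adequate randomization but an
# unclear blinding method, reporting withdrawals:
print(jadad_score(True, True, True, False, True))
```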

Data extraction

At this stage, the researchers extract the necessary information from the selected articles. Elamin believes that reviewing titles and abstracts and extracting data are key steps in the review process, which are often carried out independently by two members of the research team, whose results are ultimately compared.[37] This step aims to prevent selection bias, and it is recommended that the chance-corrected agreement between the two researchers (the kappa coefficient) be reported at the end.[26] Although data collection forms may differ between systematic reviews, they all capture information such as first author, year of publication, sample size, target population, region, and outcome. The purpose of data synthesis is to collect the findings of eligible studies, evaluate the strength of those findings, and summarize the results. In data synthesis, different analysis frameworks can be used, such as meta-ethnography, meta-analysis, or thematic synthesis.[38] Finally, after quality assessment, data analysis is conducted. The first step is to provide a descriptive evaluation of each study and present the findings in tabular form; reviewing this table helps determine how the various studies can be combined and analyzed.[28] The data synthesis approach depends on the nature of the research question and of the primary studies.[39] After assessing bias and summarizing the data, it is decided whether the synthesis will be quantitative or qualitative. In the case of conceptual heterogeneity (systematic differences in study design, population, and interventions), the generalizability of the findings is reduced and a meta-analysis is not appropriate. A meta-analysis allows estimation of the effect size, which is reported as an odds ratio, relative risk, hazard ratio, prevalence, correlation, sensitivity, specificity, or incidence with a confidence interval.[26]
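The inter-rater agreement mentioned above, Cohen's kappa, can be computed directly from a 2x2 agreement table between the two screeners. The counts below are made-up illustration data, not from any real screening exercise.

```python
# Cohen's kappa for two independent screeners from a 2x2 agreement table:
# counts of articles both included, only reviewer 1 included, only
# reviewer 2 included, and both excluded.
def cohens_kappa(both_include, r1_only, r2_only, both_exclude):
    n = both_include + r1_only + r2_only + both_exclude
    p_observed = (both_include + both_exclude) / n
    # Expected agreement by chance, from each reviewer's marginal rates
    p1_inc = (both_include + r1_only) / n
    p2_inc = (both_include + r2_only) / n
    p_expected = p1_inc * p2_inc + (1 - p1_inc) * (1 - p2_inc)
    return (p_observed - p_expected) / (1 - p_expected)

print(round(cohens_kappa(40, 5, 10, 145), 3))  # → 0.793
```

Values above roughly 0.6-0.8 are conventionally read as substantial agreement, though reporting the raw table alongside kappa is good practice.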

Estimation of the effect size in systematic review and meta-analysis studies varies according to the type of studies entered into the analysis. Unlike the mean, prevalence, or incidence, the odds ratio, relative risk, and hazard ratio must be pooled on the logarithmic scale, combining the logarithm of each statistic with its logarithmic standard error [ Table 1 ].
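The log-scale conversion described above can be sketched as follows: the standard error of the log ratio is recovered from the reported 95% confidence interval, since that interval spans about 2 × 1.96 standard errors on the log scale. The example ratio and interval are hypothetical.

```python
# Sketch: a ratio measure (OR/RR/HR) is pooled on the log scale; the
# standard error of log(ratio) is recovered from the reported 95% CI.
import math

def log_effect_and_se(ratio, ci_low, ci_high):
    log_ratio = math.log(ratio)
    # A 95% CI spans about 2 * 1.96 standard errors on the log scale
    se = (math.log(ci_high) - math.log(ci_low)) / (2 * 1.96)
    return log_ratio, se

# Hypothetical study reporting OR 2.0 (95% CI 1.2 to 3.3)
log_or, se = log_effect_and_se(2.0, 1.2, 3.3)
print(round(log_or, 3), round(se, 3))
```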

Effect size in systematic review and meta-analysis

| Systematic review type | Primary studies | Measures of interest |
| --- | --- | --- |
| Prevalence systematic review | Cross-sectional studies; descriptive studies | Prevalence; mean; correlation |
| Observational systematic review | Cohort studies; case-control studies; analytical descriptive studies | OR; RR; mean difference; standardized mean difference |
| Clinical trials systematic review | RCT; non-RCT | RR; risk difference; NNT; NNH; mean difference |
| Diagnostic systematic review | Diagnostic accuracy studies | Sensitivity; specificity; PPV; NPV; PLR; NLR; DOR |

OR=odds ratio; RR=relative risk; RCT=randomized controlled trial; NNT=number needed to treat; NNH=number needed to harm; PPV=positive predictive value; NPV=negative predictive value; PLR=positive likelihood ratio; NLR=negative likelihood ratio; DOR=diagnostic odds ratio

Interpreting and presenting results (answers to research questions)

A systematic review ends with the interpretation of the results. At this stage, the results of the study are summarized and conclusions are presented to improve clinical and therapeutic decision-making. A systematic review, with or without meta-analysis, provides the best evidence available in the hierarchy of evidence-based practice.[14] Using meta-analysis can provide explicit conclusions. Conceptually, meta-analysis combines the results of two or more studies that address the same intervention and similar outcomes. Instead of a simple average of the results of the various studies, a meta-analysis reports a weighted average, meaning that studies with larger sample sizes carry more weight. Two models can be used to combine the results of various studies: fixed effect and random effects. The fixed-effect model assumes that the parameter under study is constant across all studies, while the random-effects model assumes that the parameter is distributed across the studies, each study estimating part of that distribution. The random-effects model offers a more conservative estimate.[40]
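The weighted-average idea above can be made concrete. This is a minimal sketch of inverse-variance pooling (fixed effect) and a DerSimonian-Laird random-effects estimate; the effects and standard errors are illustrative log odds ratios, not real data.

```python
# Sketch of inverse-variance (fixed-effect) pooling and a
# DerSimonian-Laird random-effects estimate for illustrative data.
def pooled(effects, ses):
    w = [1 / se**2 for se in ses]                        # fixed-effect weights
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    # Cochran's Q feeds the between-study variance (tau^2) estimate
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                        # between-study variance
    w_re = [1 / (se**2 + tau2) for se in ses]            # random-effects weights
    random = sum(wi * e for wi, e in zip(w_re, effects)) / sum(w_re)
    return fixed, random

fixed, random = pooled([0.5, 0.3, 0.8], [0.1, 0.2, 0.15])
print(round(fixed, 3), round(random, 3))
```

Note how the random-effects weights flatten the influence of the most precise study once tau² is added to every variance, which is why the random-effects interval is the more conservative of the two.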

Three approaches can be used to assess homogeneity: (1) the forest plot, (2) Cochran's Q test (Chi-squared), and (3) the Higgins I² statistic. In the forest plot, greater overlap between confidence intervals indicates greater homogeneity. For the Q statistic, a P value less than 0.1 indicates that heterogeneity exists and a random-effects model should be used.[41] The I² index quantifies heterogeneity on a scale of 0 to 100%; values of around 25%, 50%, and 75% indicate low, moderate, and high levels of heterogeneity, respectively.[26,42] The results of the meta-analysis are presented graphically in a forest plot, which shows the statistical weight of each study along with its 95% confidence interval and standard error.[40]
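Cochran's Q and Higgins I² are directly related: I² is the share of the observed variability in Q beyond what chance alone (the degrees of freedom) would produce. A short sketch, using the same kind of illustrative log-scale effects and standard errors as above:

```python
# Cochran's Q and the Higgins I^2 statistic for illustrative study
# effects; I^2 is the share of variability in Q beyond chance.
def heterogeneity(effects, ses):
    w = [1 / se**2 for se in ses]                 # inverse-variance weights
    pooled = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    q = sum(wi * (e - pooled) ** 2 for wi, e in zip(w, effects))
    df = len(effects) - 1                         # expected Q under homogeneity
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return q, i2

q, i2 = heterogeneity([0.5, 0.3, 0.8], [0.1, 0.2, 0.15])
print(round(q, 2), round(i2, 1))  # → 4.59 56.4
```

An I² of roughly 56% would fall in the moderate band under the thresholds described above.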

The importance of meta-analyses and systematic reviews in providing evidence useful for clinical and policy decisions is ever-increasing. Nevertheless, they are prone to publication bias, which occurs when positive or significant results are preferentially published.[43] Song maintains that studies reporting results in a certain direction, or with stronger associations, may be more likely to be published than those that do not.[44] In addition, when searching for meta-analyses, gray literature (e.g., dissertations, conference abstracts, or book chapters) and unpublished studies may be missed. Moreover, meta-analyses based only on published studies may exaggerate effect-size estimates; as a result, patients may be exposed to harmful or ineffective treatments.[44,45] However, some tests can help detect expected negative results that are missing from a review due to publication bias.[46] Publication bias can also be reduced by searching for unpublished data.
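One common test for the small-study effects behind publication bias is Egger's regression: each study's standardized effect is regressed on its precision, and an intercept far from zero suggests funnel-plot asymmetry. This is a rough sketch with made-up data, omitting the significance test on the intercept.

```python
# Rough sketch of Egger's regression test for funnel-plot asymmetry:
# regress standardized effects on precision; a nonzero intercept
# suggests small-study effects. Effects/SEs are made-up data.
effects = [0.5, 0.3, 0.8, 0.4, 0.6]
ses = [0.10, 0.20, 0.15, 0.12, 0.25]

y = [e / s for e, s in zip(effects, ses)]   # standardized effects
x = [1 / s for s in ses]                    # precision
n = len(x)
x_bar, y_bar = sum(x) / n, sum(y) / n
# Ordinary least-squares slope and intercept
slope = (sum((xi - x_bar) * (yi - y_bar) for xi, yi in zip(x, y))
         / sum((xi - x_bar) ** 2 for xi in x))
intercept = y_bar - slope * x_bar           # Egger's test examines this value
print(round(intercept, 2))
```

In a full analysis the intercept would come with a standard error and t-test; here only the point estimate is computed.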

Systematic reviews and meta-analyses have certain advantages; some of the most important are: examining differences in the findings of different studies, summarizing results from various studies, increased accuracy in estimating effects, increased statistical power, overcoming problems related to small sample sizes, resolving controversies arising from conflicting studies, increased generalizability of results, determining the possible need for new studies, overcoming the limitations of narrative reviews, and generating new hypotheses for further research.[47,48]

Despite the importance of systematic reviews, authors may face numerous problems in searching, screening, and synthesizing data during this process. A systematic review requires extensive access to databases and journals, which can be costly for nonacademic researchers.[13] Also, when applying the inclusion and exclusion criteria, the reviewers' inevitable preconceptions may intervene, and the criteria may be interpreted differently by different reviewers.[49] Lee refers to some disadvantages of these studies, the most significant being: a research field cannot be summarized by one number, publication bias, heterogeneity, combining unrelated things, vulnerability to subjectivity, failure to account for all confounders, comparison of variables that are not comparable, focus only on main effects, and possible inconsistency with the results of randomized trials.[47] Different types of programs are available to perform meta-analysis. Some of the most commonly used are general statistical packages, including SAS, SPSS, R, and Stata. Using the flexible commands in these programs, meta-analyses can be run easily and the results readily plotted. However, these statistical programs are often expensive. An alternative is to use programs designed specifically for meta-analysis, including MetaWin, RevMan, and Comprehensive Meta-Analysis. These programs may have limitations, however: they accept few data formats and provide little opportunity to adjust the graphical display of findings. Another alternative is Microsoft Excel; although it is not free software, it is already installed on many computers.[20,50]

A systematic review study is a powerful and valuable tool for answering research questions, generating new hypotheses, and identifying areas where there is a lack of tangible knowledge. A systematic review study provides an excellent opportunity for researchers to improve critical assessment and evidence synthesis skills.

Authors' contributions

All authors contributed equally to this work.

Financial support and sponsorship

Conflicts of interest.

There are no conflicts of interest.

Volume 14, Issue 9

Meta-analysis and systematic review of the diagnostic value of contrast-enhanced spectral mammography for the detection of breast cancer

Jiulin Liu 1,2, Ran Xiao 3, Huijia Yin 1, Ying Hu 1, Siyu Zhen 1, Shihao Zhou 1,2, Dongming Han 1 (http://orcid.org/0000-0001-8516-1396)

1 Department of Magnetic Resonance Imaging (MRI), The First Affiliated Hospital of Xinxiang Medical University, Weihui, Henan, China
2 Department of Radiology, Luoyang Orthopedic-Traumatological Hospital of Henan Province (Henan Provincial Orthopedic Hospital), Zhengzhou, Henan, China
3 Department of Respiratory Medicine, The First Affiliated Hospital of Xinxiang Medical University, Weihui, Henan, China

Correspondence to Dr Dongming Han; 625492590@qq.com

Objective The objective is to evaluate the diagnostic effectiveness of contrast-enhanced spectral mammography (CESM) in the diagnosis of breast cancer.

Data sources PubMed, Embase and the Cochrane Library, up to 18 June 2022.

Eligibility criteria for selecting studies We included trial studies, compared the results of different researchers on CESM in the diagnosis of breast cancer, and calculated the diagnostic value of CESM for breast cancer.

Data extraction and synthesis The Quality Assessment of Diagnostic Accuracy Studies-2 (QUADAS-2) tool was used to evaluate the methodological quality of all included studies. The study followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses specification. In addition to sensitivity and specificity, other important parameters were explored in the analysis of CESM accuracy for breast cancer diagnosis. For overall accuracy estimation, summary receiver operating characteristic curves were calculated. STATA V.14.0 was used for all analyses.

Results This meta-analysis included a total of 12 studies. According to the summary estimates for CESM in the diagnosis of breast cancer, the pooled sensitivity and specificity were 0.97 (95% CI 0.92 to 0.98) and 0.76 (95% CI 0.64 to 0.85), respectively. The positive likelihood ratio was 4.03 (95% CI 2.65 to 6.11), the negative likelihood ratio was 0.05 (95% CI 0.02 to 0.09) and the diagnostic odds ratio was 89.49 (95% CI 45.78 to 174.92). Moreover, the area under the curve was 0.95.

Conclusions CESM has high sensitivity and good specificity for evaluating breast cancer, particularly in women with dense breasts, and can thus provide more information for clinical diagnosis and treatment.

  • breast imaging
  • breast tumours
  • diagnostic radiology

Data availability statement

Data sharing not applicable as no datasets were generated and/or analysed for this study.

This is an open access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited, appropriate credit is given, any changes made indicated, and the use is non-commercial. See:  http://creativecommons.org/licenses/by-nc/4.0/ .

https://doi.org/10.1136/bmjopen-2022-069788


STRENGTHS AND LIMITATIONS OF THIS STUDY

This systematic review was a comprehensive search of experimental and observational studies on contrast-enhanced spectral mammography (CESM) in the diagnosis of breast cancer.

We included only prospective studies, which are of higher quality with less bias, and our study screening criteria were developed prior to the meta-analysis.

The study was conducted by two people and was strictly based on inclusion criteria.

The data in this study were summarised using sound statistical methods.

Recent literature was added, and when several publications came from the same institution, only the most recent or the one with the largest sample size was included.

We summarised the sensitivity and specificity of CESM in the diagnosis of breast cancer.

Introduction

Globally, female breast cancer has overtaken lung cancer as the leading cause of cancer death, making it the fifth most common cause of death. 1 Since the mid-20th century, the incidence of breast cancer in women has been increasing slowly, by about 0.5% per year. 2 At present, the diagnostic methods for breast cancer include MRI, full-field digital mammography (FFDM) and ultrasound (US). MRI is currently the most sensitive examination for the diagnosis of breast cancer. 3 However, it has disadvantages, such as being unsuitable for claustrophobic patients and its high price. In addition, although FFDM is an effective diagnostic method for breast cancer, it carries the hazard of recall and the need for further testing. 4 Ultrasonography has good diagnostic efficacy for breast cancer, especially in women with dense breasts; however, it has a relatively low positive predictive value. 5 Contrast-enhanced spectral mammography (CESM), which visualises breast neovascularisation in a manner similar to MRI, is an emerging technology that uses an iodine contrast agent. 6 CESM has the advantages of patient friendliness and low cost. Previous studies have shown that CESM has obvious advantages over US in displaying lesions. The advantage of CESM is that it can show changes in anatomy and local blood perfusion, which may be caused by tumour angiogenesis. 7 Moreover, CESM is useful in detecting suspicious findings in routine breast imaging, 7 and the sensitivity and specificity of CESM differ between studies.

Several meta-analyses have been conducted regarding the diagnostic performance of CESM for breast cancer; however, their pooled results differed and had several limitations. 8–11 On the one hand, sensitivity and specificity differed across these meta-analyses. 8 10 11 On the other hand, the numbers of included studies were limited. In addition, some meta-analyses included non-English studies and overlapping studies, which might have affected their pooled results. In the past few years, several studies evaluating the diagnostic value of CESM in breast cancer have been published. Therefore, we conducted this meta-analysis using the available evidence to comprehensively determine whether CESM is effective in detecting breast cancer in women.

Material and methods

Our study followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) specification, 12 which meets the requirements of a diagnostic systematic review.

Search strategy

To evaluate the accuracy of CESM in diagnosing breast cancer, we searched the following databases: PubMed, Embase and the Cochrane Library. Two reviewers, JL and RX, independently searched the above databases up to 18 June 2022. Our search terms included ‘contrast-enhanced spectral mammography’, ‘Dual-Energy Contrast-Enhanced Spectral Mammography’, ‘CESM’, ‘contrast-enhanced digital mammography’, ‘CEDM’, ‘Breast Neoplasms’, ‘Breast Neoplasm’, ‘Breast Tumor’, ‘Breast Tumors’, ‘Breast Cancer’, ‘Malignant Neoplasm of Breast’, ‘Breast Malignant Neoplasm’, ‘Breast Carcinomas’, ‘Breast Carcinoma’, ‘breast mass’, ‘breast lesion’, ‘breast lesions’ and ‘breast diseases’. In addition, the references of all the included studies were also reviewed.

Inclusion and exclusion criteria

The inclusion criteria were: (1) studies diagnosing breast cancer, (2) studies providing data on sensitivity and specificity, (3) studies involving ≥10 patients or cases, (4) English language and (5) prospective studies. The exclusion criteria were: (1) overlapping research, (2) commentaries, letters, editorials or abstracts or (3) studies on artificial intelligence and radiomics.

Study screening

The titles and abstracts of the literature in the electronic databases were initially screened by two authors, following the above inclusion and exclusion criteria. Each of the two researchers screened twice to avoid omission. In case of disagreement, the third author was consulted to decide. Full texts of eligible studies were downloaded and further screened. If the authors and institutions of two studies were the same, we included the most recently published study with the largest sample size. If the institutions were the same but the authors differed, we emailed the corresponding authors to ask; if we received no reply, we included the most recently published study with the largest sample size.

Data abstraction

Two reviewers extracted the data; where necessary, differences were resolved by a third reviewer. Each study was analysed for the following information: first author name, publication year, country, numbers of patients and lesions, median age, and the numbers of true-positive (TP), false-positive (FP), false-negative (FN) and true-negative (TN) results.

Quality assessment

The methodological quality of the included publications was assessed with the Quality Assessment of Diagnostic Accuracy Studies-2 (QUADAS-2) tool. 13 QUADAS-2 focuses on four domains with minimal overlap: patient selection, index test, reference standard, and flow and timing, which capture the main quality aspects of a diagnostic study. Each domain is assessed for risk of bias, and the first three domains are also assessed for applicability. The risk of bias was considered low if the study met the above criteria and high otherwise. Disagreements between the two reviewers on quality assessment were resolved by consensus.

Statistical analysis

STATA V.14.0 was used for all analyses. The I² measure was used to quantify heterogeneity between studies. In the absence of statistical heterogeneity, a fixed-effect model was used to pool the data; otherwise, a random-effects model was used. Sensitivity was computed as TP/(TP+FN), where TP is the number of true-positive results and FN the number of false-negative results. Specificity was computed as TN/(TN+FP), where TN is the number of true-negative results and FP the number of false-positive results. 14 We also computed other significant measures for the evaluation of diagnostic tests, such as the positive likelihood ratio (PLR), negative likelihood ratio (NLR) and diagnostic OR (DOR). The summary receiver operating characteristic (SROC) curve and the area under the curve (AUC) of the SROC curve were also computed.
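The per-study measures defined above all follow from a single 2x2 table. A short sketch with illustrative counts (not data from the included studies):

```python
# The accuracy measures described above, computed from one 2x2 table;
# the counts are illustrative, not from the included studies.
def diagnostic_measures(tp, fp, fn, tn):
    sens = tp / (tp + fn)          # sensitivity: TP/(TP+FN)
    spec = tn / (tn + fp)          # specificity: TN/(TN+FP)
    plr = sens / (1 - spec)        # positive likelihood ratio
    nlr = (1 - sens) / spec        # negative likelihood ratio
    dor = plr / nlr                # diagnostic odds ratio (= TP*TN / FP*FN)
    return sens, spec, plr, nlr, dor

sens, spec, plr, nlr, dor = diagnostic_measures(tp=90, fp=20, fn=10, tn=80)
print(round(sens, 2), round(spec, 2), round(dor, 1))  # → 0.9 0.8 36.0
```

Note that the DOR equals PLR/NLR, which is why it condenses both likelihood ratios into a single summary of discriminative power.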

Study characteristics

After the systematic search, we included 12 studies. 15–26 The complete selection process is detailed in the PRISMA flowchart ( figure 1 ). Of 544 screened studies, 85 were subjected to full-text reading. The characteristics of the 12 included studies are shown in table 1 . All 12 are prospective studies published between 2014 and 2022. Most patients had US, mammography and related examinations before the CESM examination. Patients with dense breasts accounted for approximately two-thirds of the sample. In addition, the methodological quality assessment of all included studies is shown in online supplemental table 1 .

Supplemental material


Study characteristics of each included study


The figure shows the workflow for study screening and selection. CESM, contrast-enhanced spectral mammography.

Diagnostic accuracy of CESM

The sensitivity and specificity values are shown in forest plots ( figure 2 ). A very high pooled sensitivity of 0.97 (95% CI 0.92 to 0.98) was estimated. The pooled specificity was 0.76 (95% CI 0.64 to 0.85). The PLR was 4.03 (95% CI 2.65 to 6.11), the NLR was 0.05 (95% CI 0.02 to 0.09) ( figure 3 ) and the DOR was 89.49 (95% CI 45.78 to 174.92) ( online supplemental figure 1 ). The I² values for sensitivity, specificity, PLR, NLR and DOR were 76.60%, 87.95%, 86.25%, 65.73% and 99.78%, respectively.

Forest plot of estimates of sensitivity and specificity for contrast-enhanced spectral mammography in the diagnosis of breast cancer.

Forest plot of estimates of positive likelihood ratio and negative likelihood ratio for contrast-enhanced spectral mammography in the diagnosis of breast cancer.

As shown in figure 4 , the SROC curve shows an AUC of 0.95 (0.93 to 0.97). The confidence region is an interval estimate around the summary (average) point, whereas the prediction region is an interval estimate for the result of an individual future study.

The plot shows the summary bivariate ROC curve for CESM diagnostic accuracy. AUC, area under the curve; CESM, contrast-enhanced spectral mammography; ROC, receiver operating characteristic curve; SENS, sensitivity; SPEC, specificity; SROC, summary receiver operating characteristic curve.

A confidence contour and a prediction contour are shown in the figure.

Fagan plots were drawn to relate the prior probability (pretest prevalence) to the posterior probability (the probability estimated after the diagnostic test). In our sample, the pretest probability of malignancy was 50%; a positive CESM finding yielded a post-test probability of 80%, while a negative finding yielded a post-test probability of 4% ( online supplemental figure 2 ).
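The Fagan plot is a graphical form of Bayes' rule: post-test odds equal pre-test odds multiplied by the likelihood ratio. Using the pooled PLR and NLR reported above, the post-test probabilities can be recomputed directly:

```python
# Bayes' rule behind the Fagan plot: post-test odds = pre-test odds x
# likelihood ratio, using the pooled PLR (4.03) and NLR (0.05) above.
def post_test_probability(pretest_prob, likelihood_ratio):
    pretest_odds = pretest_prob / (1 - pretest_prob)
    posttest_odds = pretest_odds * likelihood_ratio
    return posttest_odds / (1 + posttest_odds)

p_pos = post_test_probability(0.50, 4.03)   # after a positive CESM finding
p_neg = post_test_probability(0.50, 0.05)   # after a negative CESM finding
print(round(p_pos, 2), round(p_neg, 2))
```

The computed values (about 80% and about 5%) match the probabilities read off the Fagan plot, allowing for graphical rounding.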

Regression analysis

We analysed the possible influence of several covariates (number of lesions, number of patients, inclusion of dense breasts only, year of publication) on the diagnostic accuracy of CESM. The regression analysis showed that the sensitivity of the studies that included only dense breasts differed from that of the other studies, but both were high ( online supplemental figure 3 ). However, the limited number of included studies reduces the reliability of the regression analysis.

Publication bias

A funnel plot drawn with Stata V.14.0 was used to analyse the publication bias of the included studies ( online supplemental figure 4 ). The included studies were evenly distributed on both sides of the regression line, indicating no obvious publication bias (p=0.78).

CESM is emerging as a valuable tool for the diagnosis and staging of breast cancer. It combines the contrast-enhancement effect caused by tumour neovascularisation with information on anatomical changes. Lesions are highlighted by subtraction of the paired images, which further increases the sensitivity of CESM for the diagnosis of breast cancer. It improves accuracy in diagnosing breast cancer, providing a more accurate tumour size and identification of multifocal disease, especially in patients with dense breasts. 27

Our pooled sensitivity (0.97, 95% CI 0.92 to 0.98) was higher, and our pooled specificity (0.76, 95% CI 0.64 to 0.85) slightly lower, than in a previous meta-analysis, 9 which reported a pooled sensitivity of 0.89 (95% CI 0.88 to 0.91) and a pooled specificity of 0.84 (95% CI 0.82 to 0.85). The higher sensitivity may reflect our more rigorous study screening, the inclusion of the latest literature, and the increasing clinical use of CESM in recent years. Moreover, all the studies we included are prospective, which are less susceptible to bias than retrospective studies. Another previous meta-analysis 8 found that CESM has high sensitivity but low specificity for the diagnosis of breast cancer. This may be due to the following reasons: three of its included studies were similar and written by the same first author, and it included only eight studies, with the pooled specificity obtained from six. All of these factors may introduce bias. During our screening, five studies came from the same authors 15 28–31 with similar results; we included only the one that was prospective, had a large sample size and spanned the longest time.

In addition, compared with other studies, this study included the latest studies in recent years, and conducted a more rigorous article screening, with each of the two researchers screening two times.

The DOR is a common statistic in epidemiology that expresses the strength of the association between exposure and disease. 32 The DOR of a test is the ratio of the odds of a positive result in the diseased to the odds of a positive result in the non-diseased. In our meta-analysis, the DOR was 89.49 (95% CI 45.78 to 174.92), which is high: the odds of a positive CESM result in patients with breast cancer were about 89 times the odds in patients without breast cancer. The DOR offers considerable advantages in a meta-analysis of diagnostic studies by combining results from different studies into a more precise pooled estimate. The I² statistic, also known as the inconsistency index, measures heterogeneity across studies in a meta-analysis; it quantifies the proportion of the total variation in effect estimates that is due to heterogeneity rather than chance. Several factors may explain the heterogeneity observed here. Differences in study populations: the included studies may have varied in patient characteristics such as age, mammary gland type, disease severity or comorbidities. Clinical and contextual factors: heterogeneity in the DOR can also arise from differences in clinical context, such as variations in disease prevalence, healthcare settings or geographic locations.

The SROC curve method takes into account the possible heterogeneity of thresholds. 33 The SROC shows the relationship between the TP rate and FP rate at different diagnostic thresholds. 34 In general, an AUC between 0.5 and 0.7 indicates low accuracy, between 0.7 and 0.9 good accuracy, and above 0.9 high accuracy. Our SROC curve shows an AUC of 0.95, indicating high accuracy.

The study of Hobbs et al 35 suggests that patients' preference for CESM would provide further evidence supporting its adoption as an alternative to contrast-enhanced MRI in selected clinical indications, if the diagnostic non-inferiority of CESM is confirmed. Ferranti et al 25 suggested that CESM may compensate for MRI through a slight false-negative tendency. Furthermore, Clauser et al 36 found the specificity of CESM to be higher than that of MRI. CESM identifies breast cancer based on the assessment of tumour angiogenesis. 24 Growth factors secreted by cancer cells promote the formation of new blood vessels, which supply the dividing and proliferating tumour cells. It is the increased endothelial cell gaps and vascular permeability that enhance contrast in the tumour area. CESM may combine the high sensitivity of MRI with the low cost and availability of FFDM. 37

However, this study has some limitations. First, the primary-source participants were all patients with lesions already detected by breast US or mammography, which may induce selection bias. Second, the majority of participants had dense breasts; while this highlights the suitability of CESM for dense-breast examination, it may still introduce some bias. Third, because of the large number of retrieved articles, we included only prospective studies written in English, so some reliable studies and results may have been missed.

CESM has high sensitivity and good specificity for evaluating breast cancer, particularly in women with dense breasts, and can thus provide additional information for clinical diagnosis and treatment.

Ethics statements

Patient consent for publication.

Not applicable.

Ethics approval


Contributors JL and RX designed the study. SZou and YH gathered data. JL and SZhen performed the analysis. HY and DH revised it critically for important intellectual content. DH acted as guarantor. All authors contributed to the article and approved the submitted version.

Funding This work was funded by the Henan Medical Science and Technology Research Program (LHGJ20210498, LHGJ20230528).

Competing interests None declared.

Patient and public involvement Patients and/or the public were not involved in the design, conduct, reporting or dissemination plans of this research.

Provenance and peer review Not commissioned; externally peer reviewed.

Supplemental material This content has been supplied by the author(s). It has not been vetted by BMJ Publishing Group Limited (BMJ) and may not have been peer-reviewed. Any opinions or recommendations discussed are solely those of the author(s) and are not endorsed by BMJ. BMJ disclaims all liability and responsibility arising from any reliance placed on the content. Where the content includes any translated material, BMJ does not warrant the accuracy and reliability of the translations (including but not limited to local regulations, clinical guidelines, terminology, drug names and drug dosages), and is not responsible for any error and/or omissions arising from translation and adaptation or otherwise.

A systematic review of the timing of therapeutic anticoagulation in adult patients with acute traumatic brain injury: narrative synthesis of observational studies

  • Published: 05 September 2024
  • Volume 47, article number 538 (2024)


  • Sophie Samuel   ORCID: orcid.org/0000-0003-2544-900X 1 ,
  • Jennifer Cortes 1 ,
  • Eugene Uh 2 &
  • Huimahn Alex Choi 2  

Traumatic brain injury (TBI) presents complex management scenarios, particularly in patients requiring anticoagulation for concurrent conditions such as venous thromboembolism (VTE) or atrial fibrillation (AF). This systematic review critically addresses two key questions: the optimal timing for initiating anticoagulation therapy, and whether the effect of that timing differs by the type of intracranial bleed, with a specific focus on subdural hematomas (SDH) compared with other types. A systematic search of the PubMed/MEDLINE, Embase, and Cochrane Library databases was conducted to identify relevant studies. Inclusion criteria encompassed studies assessing the effects of anticoagulation therapy on outcomes such as re-hemorrhage, hematoma expansion, thrombotic events, and hemorrhagic events in TBI patients with SDH. Of the 508 articles initially screened, 7 studies met the inclusion criteria; these varied in design and quality, precluding meta-analysis. The review highlights a significant knowledge gap: there is no consensus on when to initiate anticoagulation therapy in TBI patients, a dilemma sharpened when anticoagulation is required for VTE or AF. Early anticoagulation, particularly in patients with SDH, may elevate the risk of re-hemorrhage. Evidence on whether the type of intracranial hemorrhage influences outcomes with early anticoagulation remains inconclusive, indicating a need for further research to tailor management strategies effectively. Overall, high-quality evidence on anticoagulation therapy in TBI patients with concurrent conditions is scarce, and well-designed prospective studies are needed to elucidate optimal management strategies for this complex patient population.


Data availability

No datasets were generated or analysed during the current study.


Acknowledgements

We would like to thank librarians Sonya Fogg, MLS and Celeste Perez, MLS, from the TMC Library, 1133 John Freeman Blvd., Houston, TX 77030, for their invaluable assistance with the literature search and data acquisition for this systematic review.

Not applicable.

Author information

Authors and affiliations.

Memorial Hermann-Texas Medical Center, 6411 Fannin Street, Houston, TX, 77030, USA

Sophie Samuel & Jennifer Cortes

McGovern Medical School at UT Health, University of Texas, 6431 Fannin Street, Houston, TX, 77030, USA

Eugene Uh & Huimahn Alex Choi

Contributions

S.S. conceptualized the study and formulated the research questions. S.S. conducted the initial review of all 508 articles, while J.C. and E.U. each reviewed 254 articles. Discrepancies were resolved by HAC. S.S., J.C., and E.U. used the JBI Critical Appraisal Checklist for Cohort Studies for data collection, screening, and appraisal. All authors contributed to writing and reviewing the manuscript.

Corresponding author

Correspondence to Sophie Samuel .

Ethics declarations

Ethical approval and conflict of interest.

The authors declare no competing interests.

Additional information

Publisher's note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

Reprints and permissions

About this article

Samuel, S., Cortes, J., Uh, E. et al. A systematic review of the timing of therapeutic anticoagulation in adult patients with acute traumatic brain injury: narrative synthesis of observational studies. Neurosurg Rev 47 , 538 (2024). https://doi.org/10.1007/s10143-024-02717-1

Received : 22 June 2024

Revised : 09 August 2024

Accepted : 18 August 2024

Published : 05 September 2024

DOI : https://doi.org/10.1007/s10143-024-02717-1

  • Timing of Anticoagulation
  • Venous Thromboembolism
  • Atrial Fibrillation
  • Delayed Anticoagulation
  • Early Anticoagulation
  • Traumatic Brain Injury
  • Subdural Hematoma
  • Bleeding Complications

IMAGES

  1. systematic literature review steps

    research synthesis systematic review

  2. How to do a systematic review

    research synthesis systematic review

  3. The importance of meta-analysis and systematic review: How research

    research synthesis systematic review

  4. PPT

    research synthesis systematic review

  5. Methods in Research Synthesis: Systematic Reviews and Meta-Analysis

    research synthesis systematic review

  6. Systematic literature review phases.

    research synthesis systematic review

VIDEO

  1. Statistical Procedure in Meta-Essentials

  2. QUALITATIVE SYNTHESIS OF SYSTEMATIC REVIEWS

  3. Introduction to Evidence Synthesis

  4. Lecture Designing Organic Syntheses 7 Prof G Dyker 291014

  5. Lecture 8

  6. Ace the Systematic Literature Review!

COMMENTS

  1. Research Guides: Systematic Reviews &amp; Evidence Synthesis Methods

    Evidence syntheses are much more time-intensive than traditional literature reviews and require a multi-person research team. See this PredicTER tool to get a sense of a systematic review timeline (one type of evidence synthesis). Before embarking on an evidence synthesis, it's important to clearly identify your reasons for conducting one.

  2. Guidance to best tools and practices for systematic reviews

    Qualitative systematic review: Qualitative synthesis: Synthesis of qualitative data a: Qualitative synthesis: Synthesis without meta-analysis ... Tetzlaff J, Sampson M, Tricco AC, et al. Epidemiology and reporting characteristics of systematic reviews of biomedical research: a cross-sectional study. PLoS Med. 2016; 13 (5):1-31. doi: 10.1371 ...

  3. An overview of methodological approaches in systematic reviews

    1. INTRODUCTION. Evidence synthesis is a prerequisite for knowledge translation. 1 A well conducted systematic review (SR), often in conjunction with meta‐analyses (MA) when appropriate, is considered the "gold standard" of methods for synthesizing evidence related to a topic of interest. 2 The central strength of an SR is the transparency of the methods used to systematically search ...

  4. Systematic reviews: Structure, form and content

    As well as synthesis of these studies' findings, there should be an element of evaluation and quality assessment. ... 2015) - although a systematic review may be an inappropriate or unnecessary research methodology for answering many research questions. Systematic reviews can be inadvisable for a variety of reasons. It may be that the topic ...

  5. How to Do a Systematic Review: A Best Practice Guide for ...

    The best reviews synthesize studies to draw broad theoretical conclusions about what a literature means, linking theory to evidence and evidence to theory. This guide describes how to plan, conduct, organize, and present a systematic review of quantitative (meta-analysis) or qualitative (narrative review, meta-synthesis) information.

  6. Guidelines for writing a systematic review

    A Systematic Review (SR) is a synthesis of evidence that is identified and critically appraised to understand a specific topic. SRs are more comprehensive than a Literature Review, which most academics will be familiar with, as they follow a methodical process to identify and analyse existing literature (Cochrane, 2022).

  7. Introduction to Systematic Reviews

    A systematic review identifies and synthesizes all relevant studies that fit prespecified criteria to answer a research question (Lasserson et al. 2019; IOM 2011).What sets a systematic review apart from a narrative review is that it follows consistent, rigorous, and transparent methods established in a protocol in order to minimize bias and errors.

  8. Research Synthesis Methods

    Research Synthesis Methods journal enables cross-fertilization across all scientific disciplines, publishing advances in the practices and methodologies for conducting research syntheses and systematic reviews. Spanning numerous disciplines, including health and social sciences, the journal allows researchers to learn, interact with and use one ...

  9. Narrative reanalysis: A methodological framework for a new brand of reviews

    Knowledge synthesis provides comprehensive overviews and interpretations of research data. Systematic reviews are highly methodological, whereas narrative reviews allow for a broader interpretation but lack methodological rigor. ... The methods section of a knowledge synthesis or review serves a critical function in ensuring the transparency ...

  10. PDF Checklist for Systematic Reviews and Research Syntheses

    JBI Systematic Reviews The core of evidence synthesis is the systematic review of literature of a particular intervention, condition or issue. The systematic review is essentially an analysis of the available literature (that is, evidence) and a judgment of the effectiveness or otherwise of a practice, involving a series of complex steps.

  11. How to Do a Systematic Review: A Best Practice Guide ...

    Systematic reviews are characterized by a methodical and replicable methodology and presentation. They involve a comprehensive search to locate all relevant published and unpublished work on a subject; a systematic integration of search results; and a critique of the extent, nature, and quality of evidence in relation to a particular research question. The best reviews synthesize studies to ...

  12. Systematic Reviews & Evidence Synthesis Methods

    In a systematic review, researchers do more than summarize findings from identified articles. You will synthesize the information you want to include. While a summary is a way of concisely relating important themes and elements from a larger work or works in a condensed form, a synthesis takes the information from a variety of works and ...

  13. Evidence Synthesis & Systematic Reviews

    Synthesis of existing research: Conclusions are more qualitative and may not be based on study quality. ... Types of evidence synthesis include: Systematic Review. Systematically and transparently collect and categorize existing evidence on a broad question of scientific, policy or management importance. ...

  14. Systematic Review

    A systematic review is a type of review that uses repeatable methods to find, select, and synthesize all available evidence. It answers a clearly formulated research question and explicitly states the methods used to arrive at the answer. Example: Systematic review. In 2008, Dr. Robert Boyle and his colleagues published a systematic review in ...

  15. Synthesis and systematic maps

    Synthesis is the process of combining the findings of research studies. A synthesis is also the product and output of the combined studies. This output may be a written narrative, a table, or graphical plots, including statistical meta-analysis. ... If a systematic review question is about the effectiveness of an intervention, then the included ...

  16. Overview

    Systematic Review: Comprehensive literature synthesis on a specific research question, typically requires a team: Systematic; exhaustive and comprehensive; search of all available evidence: Yes: Yes: Narrative and tables, describes what is known and unknown, recommendations for future research, limitations of findings:

  17. Synthesise

    In a qualitative systematic review, data can be presented in a number of different ways. A typical procedure in the health sciences is thematic analysis. As explained by James Thomas and Angela Harden (2008) in an article for BMC Medical Research Methodology: "Thematic synthesis has three stages: the coding of text 'line-by-line'

  18. Steps in a Review

    Formulating a research question is key to a systematic review. It will be the foundation upon which the rest of the research is built. At this stage in the process, you will have identified a knowledge gap in your field, and you are aiming to answer a specific question. ... Beyond PICO: The SPIDER tool for qualitative evidence synthesis ...

  19. What Synthesis Methodology Should I Use? A Review and Analysis of

    The first is a well-developed research question that gives direction to the synthesis (e.g., meta-analysis, systematic review, meta-study, concept analysis, rapid review, realist synthesis). The second begins as a broad general question that evolves and becomes more refined over the course of the synthesis (e.g., meta-ethnography, scoping ...

  20. Qualitative Evidence Synthesis: Where Are We at?

    The term QES is used, and is the preferred term of the Cochrane Qualitative and Implementation Methods Group, as it acknowledges that qualitative research requires its own methods for synthesis which reflects the nature of the qualitative paradigm, rather than simply using the same methods devised for systematic reviews of quantitative research (Booth et al., 2016).

  21. _KINS5594: Guide to Systematic Searching for Evidence Synthesis Projects

    Create your own concept table for your research question and complete the keyword brainstorming section . Move on to the next page, where you'll learn about the specific controlled vocabulary PubMed uses and how to locate relevant terms within it ... In an evidence synthesis project like a systematic review, you must document: Where you ...

  22. _KINS5594: Guide to Systematic Searching for Evidence Synthesis Projects

    Rayyan is a user-friendly tool which enables a single person or a team to perform masked screening of references for evidence synthesis projects. It has some excellent features, especially if you're working with a large set of results. Rayyan is designed for screening, not for citation management or citing while writing!

  23. Evidence Synthesis for Librarians and Information Specialists

    This course brings the core components of the Evidence Synthesis Institute to a self-paced open online course. It contains 15 modules guiding learners through the ES process from an introduction to review types through writing a methods section for publication, with an emphasis on developing and using systematic search strategies. Development of this

  24. Physicians' perspectives on clinical indicators: systematic review and

    This thematic synthesis of data identified from a systematic review of the literature was focused on physicians' views regarding the utility of clinical indicators in practice. This is important to understand given the increasing use of clinical indicators and expectations that physicians will use and act on clinical indicator data.

  25. Methods for the thematic synthesis of qualitative research in

    The systematic review is an important technology for the evidence-informed policy and practice movement, which aims to bring research closer to decision-making [1, 2]. This type of review uses rigorous and explicit methods to bring together the results of primary research in order to provide reliable answers to particular questions [3-6]. The picture that is presented aims to be distorted ...

  26. Children and young people's experiences of living with developmental

    Booth, A. (2016). Searching for qualitative research for inclusion in systematic reviews: a structured methodological review. Systematic Reviews, 5(1), 74. ... J., & Copley, J. (2015). The meaning of leisure for children and young people with physical disabilities: A systematic evidence synthesis. Developmental Medicine & Child Neurology ...

  27. Use of social network analysis in health research: a scoping review

    Scoping reviews are a knowledge synthesis approach that aims to uncover the volume, range, reach and coverage of a body of literature on a specific topic [49]. They differ from systematic reviews, another type of knowledge synthesis, in their objectives. Systematic reviews seek to answer clinical or epidemiological questions and are conducted to ...

  28. How to Write a Systematic Review: A Narrative Review

    In this study, the steps of a systematic review, such as designing and identifying the research question, searching for qualified published studies, extracting and synthesizing information pertaining to the research question, and interpreting the results, are presented in detail. This will be helpful to all interested researchers.

  29. Meta-analysis and systematic review of the diagnostic value of contrast

    Data extraction and synthesis: the Quality Assessment of Diagnostic Accuracy Studies-2 (QUADAS-2) tool was used to evaluate the methodological quality of all the included studies. The study followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) specification. In addition to sensitivity and specificity, other important parameters were explored in an analysis of CESM accuracy for breast ...

  30. A systematic review of the timing of therapeutic ...

    Eligibility criteria: the eligibility criteria were defined to ensure relevance and robustness in addressing the research questions. For a comprehensive overview of these criteria, refer to Table 1. Criteria encompassed various study designs, including meta-analyses, systematic reviews, randomized controlled trials, prospective and retrospective cohort studies, case-control studies, and ...