Methodological Approaches to Literature Review
- Living reference work entry
- First Online: 09 May 2023
- Authors: Dennis Thomas, Elida Zairina & Johnson George
The literature review can serve various functions in the contexts of education and research. It aids in identifying knowledge gaps, informing research methodology, and developing a theoretical framework during the planning stages of a research study or project, as well as in reporting review findings in the context of the existing literature. This chapter discusses the methodological approaches to conducting a literature review and offers an overview of different types of reviews, including narrative reviews, scoping reviews, and systematic reviews with reporting strategies such as meta-analysis and meta-synthesis. Review authors should consider the scope of the literature review when selecting a type and method. Being focused is essential for a successful review; however, this must be balanced against the relevance of the review to a broad audience.
Authors and Affiliations
Centre of Excellence in Treatable Traits, College of Health, Medicine and Wellbeing, University of Newcastle, Hunter Medical Research Institute Asthma and Breathing Programme, Newcastle, NSW, Australia
Dennis Thomas
Department of Pharmacy Practice, Faculty of Pharmacy, Universitas Airlangga, Surabaya, Indonesia
Elida Zairina
Centre for Medicine Use and Safety, Monash Institute of Pharmaceutical Sciences, Faculty of Pharmacy and Pharmaceutical Sciences, Monash University, Parkville, VIC, Australia
Johnson George
Corresponding author: Johnson George.
Section Editor information
College of Pharmacy, Qatar University, Doha, Qatar
Derek Charles Stewart
Department of Pharmacy, University of Huddersfield, Huddersfield, United Kingdom
Zaheer-Ud-Din Babar
Copyright information
© 2023 Springer Nature Switzerland AG
Cite this entry
Thomas, D., Zairina, E., George, J. (2023). Methodological Approaches to Literature Review. In: Encyclopedia of Evidence in Pharmaceutical Public Health and Health Services Research in Pharmacy. Springer, Cham. https://doi.org/10.1007/978-3-030-50247-8_57-1
DOI: https://doi.org/10.1007/978-3-030-50247-8_57-1
Received: 22 February 2023
Accepted: 22 February 2023
Published: 09 May 2023
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-50247-8
Online ISBN: 978-3-030-50247-8
An overview of methodological approaches in systematic reviews
Prabhakar Veginadu, Hanny Calache, Akshaya Pandian, Mohd Masood
Correspondence: Dr. Prabhakar Veginadu, Department of Rural Clinical Sciences, La Trobe University, PO Box 199, Bendigo, Victoria 3552, Australia. Email: [email protected]
Received 2021 Aug 8; Accepted 2022 Mar 18; Issue date 2022 Mar.
This is an open access article under the terms of the http://creativecommons.org/licenses/by/4.0/ License, which permits use, distribution and reproduction in any medium, provided the original work is properly cited and is not used for commercial purposes.
The aim of this overview is to identify and collate evidence from existing published systematic review (SR) articles evaluating various methodological approaches used at each stage of an SR.
The search was conducted in five electronic databases from inception to November 2020 and updated in February 2022: MEDLINE, Embase, Web of Science Core Collection, Cochrane Database of Systematic Reviews, and APA PsycINFO. Title and abstract screening were performed in two stages by one reviewer, supported by a second reviewer. Full‐text screening, data extraction, and quality appraisal were performed by two reviewers independently. The quality of the included SRs was assessed using the AMSTAR 2 checklist.
The search retrieved 41,556 unique citations, of which nine SRs were deemed eligible for inclusion in the final synthesis. Included SRs evaluated 24 unique methodological approaches used for defining the review scope and eligibility, literature search, screening, data extraction, and quality appraisal in the SR process. Limited evidence supports the following: (a) searching multiple resources (electronic databases, handsearching, and reference lists) to identify relevant literature; (b) excluding non‐English, gray, and unpublished literature; and (c) use of text‐mining approaches during title and abstract screening.
The overview identified limited SR‐level evidence on various methodological approaches currently employed during five of the seven fundamental steps in the SR process, as well as some methodological modifications currently used in expedited SRs. Overall, findings of this overview highlight the dearth of published SRs focused on SR methodologies and this warrants future work in this area.
Keywords: knowledge synthesis, methodology, overview, systematic reviews
1. INTRODUCTION
Evidence synthesis is a prerequisite for knowledge translation. 1 A well conducted systematic review (SR), often in conjunction with meta‐analyses (MA) when appropriate, is considered the “gold standard” of methods for synthesizing evidence related to a topic of interest. 2 The central strength of an SR is the transparency of the methods used to systematically search, appraise, and synthesize the available evidence. 3 Several guidelines, developed by various organizations, are available for the conduct of an SR; 4 , 5 , 6 , 7 among these, Cochrane is considered a pioneer in developing rigorous and highly structured methodology for the conduct of SRs. 8 The guidelines developed by these organizations outline seven fundamental steps required in SR process: defining the scope of the review and eligibility criteria, literature searching and retrieval, selecting eligible studies, extracting relevant data, assessing risk of bias (RoB) in included studies, synthesizing results, and assessing certainty of evidence (CoE) and presenting findings. 4 , 5 , 6 , 7
The methodological rigor involved in an SR can require a significant amount of time and resources, which may not always be available. 9 As a result, there has been a proliferation of modifications made to the traditional SR process, such as refining, shortening, bypassing, or omitting one or more steps, 10 , 11 for example, limits on the number and type of databases searched, limits on publication date, language, and types of studies included, and limiting to one reviewer for screening and selection of studies, as opposed to two or more reviewers. 10 , 11 These methodological modifications are made to accommodate the needs and resource constraints of the reviewers and stakeholders (e.g., organizations, policymakers, health care professionals, and other knowledge users). While such modifications are considered time and resource efficient, they may introduce bias in the review process, reducing their usefulness. 5
Substantial research has been conducted examining various approaches used in the standardized SR methodology and their impact on the validity of SR results. There are a number of published reviews examining the approaches or modifications corresponding to single 12 , 13 or multiple steps 14 involved in an SR. However, there is yet to be a comprehensive summary of the SR‐level evidence for all the seven fundamental steps in an SR. Such a holistic evidence synthesis will provide an empirical basis to confirm the validity of current accepted practices in the conduct of SRs. Furthermore, sometimes there is a balance that needs to be achieved between the resource availability and the need to synthesize the evidence in the best way possible, given the constraints. This evidence base will also inform the choice of modifications to be made to the SR methods, as well as the potential impact of these modifications on the SR results. An overview is considered the choice of approach for summarizing existing evidence on a broad topic, directing the reader to evidence, or highlighting the gaps in evidence, where the evidence is derived exclusively from SRs. 15 Therefore, for this review, an overview approach was used to (a) identify and collate evidence from existing published SR articles evaluating various methodological approaches employed in each of the seven fundamental steps of an SR and (b) highlight both the gaps in the current research and the potential areas for future research on the methods employed in SRs.
2. METHODS
An a priori protocol was developed for this overview but was not registered with the International Prospective Register of Systematic Reviews (PROSPERO), as the review was primarily methodological in nature and did not meet PROSPERO eligibility criteria for registration. The protocol is available from the corresponding author upon reasonable request. This overview was conducted based on the guidelines for the conduct of overviews as outlined in The Cochrane Handbook. 15 Reporting followed the Preferred Reporting Items for Systematic reviews and Meta‐analyses (PRISMA) statement. 3
2.1. Eligibility criteria
Only published SRs, with or without associated MA, were included in this overview. We adopted the defining characteristics of SRs from The Cochrane Handbook. 5 According to The Cochrane Handbook, a review was considered systematic if it satisfied the following criteria: (a) clearly states the objectives and eligibility criteria for study inclusion; (b) provides reproducible methodology; (c) includes a systematic search to identify all eligible studies; (d) reports assessment of validity of findings of included studies (e.g., RoB assessment of the included studies); (e) systematically presents all the characteristics or findings of the included studies. 5 Reviews that did not meet all of the above criteria were not considered a SR for this study and were excluded. MA‐only articles were included if it was mentioned that the MA was based on an SR.
SRs and/or MA of primary studies evaluating methodological approaches used in defining review scope and study eligibility, literature search, study selection, data extraction, RoB assessment, data synthesis, and CoE assessment and reporting were included. The methodological approaches examined in these SRs and/or MA can also be related to the substeps or elements of these steps; for example, applying limits on date or type of publication are the elements of literature search. Included SRs examined or compared various aspects of a method or methods, and the associated factors, including but not limited to: precision or effectiveness; accuracy or reliability; impact on the SR and/or MA results; reproducibility of an SR steps or bias occurred; time and/or resource efficiency. SRs assessing the methodological quality of SRs (e.g., adherence to reporting guidelines), evaluating techniques for building search strategies or the use of specific database filters (e.g., use of Boolean operators or search filters for randomized controlled trials), examining various tools used for RoB or CoE assessment (e.g., ROBINS vs. Cochrane RoB tool), or evaluating statistical techniques used in meta‐analyses were excluded. 14
2.2. Search
The search for published SRs was performed on the following scientific databases initially from inception to third week of November 2020 and updated in the last week of February 2022: MEDLINE (via Ovid), Embase (via Ovid), Web of Science Core Collection, Cochrane Database of Systematic Reviews, and American Psychological Association (APA) PsycINFO. Search was restricted to English language publications. Following the objectives of this study, study design filters within databases were used to restrict the search to SRs and MA, where available. The reference lists of included SRs were also searched for potentially relevant publications.
The search terms included keywords, truncations, and subject headings for the key concepts in the review question: SRs and/or MA, methods, and evaluation. Some of the terms were adopted from the search strategy used in a previous review by Robson et al., which reviewed primary studies on methodological approaches used in study selection, data extraction, and quality appraisal steps of SR process. 14 Individual search strategies were developed for respective databases by combining the search terms using appropriate proximity and Boolean operators, along with the related subject headings in order to identify SRs and/or MA. 16 , 17 A senior librarian was consulted in the design of the search terms and strategy. Appendix A presents the detailed search strategies for all five databases.
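To illustrate how such a strategy is typically assembled, the sketch below joins synonym blocks with OR and links concepts with AND. The concept blocks and terms are invented placeholders, not the authors' actual Appendix A strategies, and real database syntax (e.g., Ovid field tags, subject headings, proximity operators) would differ.

```python
# Illustrative sketch only: the concept blocks and terms are invented
# placeholders, not the strategies reported in Appendix A of the article.

concept_blocks = {
    "evidence_synthesis": ['"systematic review*"', '"meta-analys*"', '"evidence synthes*"'],
    "methods": ["method*", "approach*", "technique*"],
    "evaluation": ["evaluat*", "compar*", "valid*"],
}

def or_block(terms):
    """Join synonyms for a single concept with OR."""
    return "(" + " OR ".join(terms) + ")"

def build_query(blocks):
    """AND the concept blocks together, mirroring a typical database search strategy."""
    return " AND ".join(or_block(terms) for terms in blocks.values())

print(build_query(concept_blocks))
# ("systematic review*" OR ...) AND (method* OR ...) AND (evaluat* OR ...)
```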
2.3. Study selection and data extraction
Title and abstract screening of references were performed in three steps. First, one reviewer (PV) screened all the titles and excluded obviously irrelevant citations, for example, articles on topics not related to SRs, non‐SR publications (such as randomized controlled trials, observational studies, scoping reviews, etc.). Next, from the remaining citations, a random sample of 200 titles and abstracts were screened against the predefined eligibility criteria by two reviewers (PV and MM), independently, in duplicate. Discrepancies were discussed and resolved by consensus. This step ensured that the responses of the two reviewers were calibrated for consistency in the application of the eligibility criteria in the screening process. Finally, all the remaining titles and abstracts were reviewed by a single “calibrated” reviewer (PV) to identify potential full‐text records. Full‐text screening was performed by at least two authors independently (PV screened all the records, and duplicate assessment was conducted by MM, HC, or MG), with discrepancies resolved via discussions or by consulting a third reviewer.
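The overview reports calibration through duplicate screening of a 200-record sample and consensus discussion rather than a formal agreement statistic; one common way to quantify such calibration is Cohen's kappa on the duplicate-screened sample. A minimal sketch, with invented screening decisions (this statistic is not reported in the article):

```python
# Hypothetical calibration check: Cohen's kappa on a duplicate-screened sample.
# The decision lists are invented for illustration.

def cohens_kappa(r1, r2):
    """Cohen's kappa for two raters making binary include/exclude decisions."""
    assert len(r1) == len(r2)
    n = len(r1)
    observed = sum(a == b for a, b in zip(r1, r2)) / n
    # Expected agreement if the two reviewers decided independently.
    p1_inc, p2_inc = sum(r1) / n, sum(r2) / n
    expected = p1_inc * p2_inc + (1 - p1_inc) * (1 - p2_inc)
    return (observed - expected) / (1 - expected)

reviewer_a = [1, 1, 0, 0, 1, 0, 0, 1, 0, 0]  # 1 = include, 0 = exclude
reviewer_b = [1, 0, 0, 0, 1, 0, 0, 1, 0, 1]
print(round(cohens_kappa(reviewer_a, reviewer_b), 2))  # 0.58 for this invented sample
```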
Data related to review characteristics, results, key findings, and conclusions were extracted by at least two reviewers independently (PV performed data extraction for all the reviews and duplicate extraction was performed by AP, HC, or MG).
2.4. Quality assessment of included reviews
The quality assessment of the included SRs was performed using the AMSTAR 2 (A MeaSurement Tool to Assess systematic Reviews). The tool consists of a 16‐item checklist addressing critical and noncritical domains. 18 For the purpose of this study, the domain related to MA was reclassified from critical to noncritical, as SRs with and without MA were included. The other six critical domains were used according to the tool guidelines. 18 Two reviewers (PV and AP) independently responded to each of the 16 items in the checklist with either “yes,” “partial yes,” or “no.” Based on the interpretations of the critical and noncritical domains, the overall quality of the review was rated as high, moderate, low, or critically low. 18 Disagreements were resolved through discussion or by consulting a third reviewer.
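For readers unfamiliar with AMSTAR 2, the sketch below encodes one common reading of how counts of critical flaws and noncritical weaknesses map to the overall confidence rating described in the AMSTAR 2 publication. It is illustrative only and is not the scoring procedure applied by the overview authors.

```python
def amstar2_overall_rating(critical_flaws: int, noncritical_weaknesses: int) -> str:
    """One reading of the AMSTAR 2 guidance for deriving an overall
    confidence rating from counts of critical and noncritical weaknesses."""
    if critical_flaws > 1:
        return "critically low"
    if critical_flaws == 1:
        return "low"
    if noncritical_weaknesses > 1:
        return "moderate"
    return "high"

# e.g., a review with no critical flaws but several noncritical weaknesses
print(amstar2_overall_rating(critical_flaws=0, noncritical_weaknesses=3))  # moderate
```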
2.5. Data synthesis
To provide an understandable summary of existing evidence syntheses, characteristics of the methods evaluated in the included SRs were examined and key findings were categorized and presented based on the corresponding step in the SR process. The categories of key elements within each step were discussed and agreed by the authors. Results of the included reviews were tabulated and summarized descriptively, along with a discussion on any overlap in the primary studies. 15 No quantitative analyses of the data were performed.
3. RESULTS
From 41,556 unique citations identified through the literature search, 50 full‐text records were reviewed, and nine systematic reviews 14 , 19 , 20 , 21 , 22 , 23 , 24 , 25 , 26 were deemed eligible for inclusion. The flow of studies through the screening process is presented in Figure 1. A list of excluded studies with reasons can be found in Appendix B.
Figure 1. Study selection flowchart
3.1. Characteristics of included reviews
Table 1 summarizes the characteristics of included SRs. The majority of the included reviews (six of nine) were published after 2010. 14 , 22 , 23 , 24 , 25 , 26 Four of the nine included SRs were Cochrane reviews. 20 , 21 , 22 , 23 The number of databases searched in the reviews ranged from 2 to 14, 2 reviews searched gray literature sources, 24 , 25 and 7 reviews included a supplementary search strategy to identify relevant literature. 14 , 19 , 20 , 21 , 22 , 23 , 26 Three of the included SRs (all Cochrane reviews) included an integrated MA. 20 , 21 , 23
Table 1. Characteristics of included studies
SR = systematic review; MA = meta‐analysis; RCT = randomized controlled trial; CCT = controlled clinical trial; N/R = not reported.
The included SRs evaluated 24 unique methodological approaches (26 in total) used across five steps in the SR process; 8 SRs evaluated 6 approaches, 19 , 20 , 21 , 22 , 23 , 24 , 25 , 26 while 1 review evaluated 18 approaches. 14 Exclusion of gray or unpublished literature 21 , 26 and blinding of reviewers for RoB assessment 14 , 23 were evaluated in two reviews each. Included SRs evaluated methods used in five different steps in the SR process, including methods used in defining the scope of review ( n = 3), literature search ( n = 3), study selection ( n = 2), data extraction ( n = 1), and RoB assessment ( n = 2) (Table 2 ).
Table 2. Summary of findings from reviews evaluating systematic review methods
Includes databases (MEDLINE, Embase, PsycINFO, CINAHL, Biosis, CancerLIT, Cabnar, CENTRAL, Chirolars, HealthStar, SciCitIndex, Cochrane Central Trial Register), internet, and handsearching.
Includes MEDLINE, Embase, PsychLIT, PsychINFO, Lilac and Cochrane Central Trials Register; HSS, Highly Sensitive Search; SR, systematic review; MA, meta-analysis; RCT, randomized controlled trial; RoB, risk of bias.
There was some overlap in the primary studies evaluated in the included SRs on the same topics: Schmucker et al. 26 and Hopewell et al. 21 ( n = 4), Hopewell et al. 20 and Crumley et al. 19 ( n = 30), and Robson et al. 14 and Morissette et al. 23 ( n = 4). There were no conflicting results between any of the identified SRs on the same topic.
3.2. Methodological quality of included reviews
Overall, the quality of the included reviews was assessed as moderate at best (Table 2 ). The most common critical weakness in the reviews was failure to provide justification for excluding individual studies (four reviews). Detailed quality assessment is provided in Appendix C .
3.3. Evidence on systematic review methods
3.3.1. Methods for defining review scope and eligibility
Two SRs investigated the effect of excluding data obtained from gray or unpublished sources on the pooled effect estimates of MA. 21 , 26 Hopewell et al. 21 reviewed five studies that compared the impact of gray literature on the results of a cohort of MA of RCTs in health care interventions. Gray literature was defined as information published in "print or electronic sources not controlled by commercial or academic publishers." Findings showed an overall greater treatment effect for published trials than for trials reported in gray literature. In a more recent review, Schmucker et al. 26 addressed similar objectives by investigating gray and unpublished data in medicine. In addition to gray literature, defined similarly to the previous review by Hopewell et al., the authors also evaluated unpublished data—defined as "supplemental unpublished data related to published trials, data obtained from the Food and Drug Administration or other regulatory websites or postmarketing analyses hidden from the public." The review found that in the majority of the MA, excluding gray literature had little or no effect on the pooled effect estimates. The evidence was too limited to conclude whether the data from gray and unpublished literature had an impact on the conclusions of MA. 26
Morrison et al. 24 examined five studies measuring the effect of excluding non‐English language RCTs on the summary treatment effects of SR‐based MA in various fields of conventional medicine. Although none of the included studies reported major difference in the treatment effect estimates between English only and non‐English inclusive MA, the review found inconsistent evidence regarding the methodological and reporting quality of English and non‐English trials. 24 As such, there might be a risk of introducing “language bias” when excluding non‐English language RCTs. The authors also noted that the numbers of non‐English trials vary across medical specialties, as does the impact of these trials on MA results. Based on these findings, Morrison et al. 24 conclude that literature searches must include non‐English studies when resources and time are available to minimize the risk of introducing “language bias.”
3.3.2. Methods for searching studies
Crumley et al. 19 analyzed recall (also referred to as “sensitivity” by some researchers; defined as “percentage of relevant studies identified by the search”) and precision (defined as “percentage of studies identified by the search that were relevant”) when searching a single resource to identify randomized controlled trials and controlled clinical trials, as opposed to searching multiple resources. The studies included in their review frequently compared a MEDLINE only search with the search involving a combination of other resources. The review found low median recall estimates (median values between 24% and 92%) and very low median precisions (median values between 0% and 49%) for most of the electronic databases when searched singularly. 19 A between‐database comparison, based on the type of search strategy used, showed better recall and precision for complex and Cochrane Highly Sensitive search strategies (CHSSS). In conclusion, the authors emphasize that literature searches for trials in SRs must include multiple sources. 19
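As a concrete illustration of the recall and precision definitions used by Crumley et al., 19 the short sketch below computes both measures for a hypothetical search; the counts are invented.

```python
def recall_and_precision(retrieved_relevant: int, total_relevant: int, total_retrieved: int):
    """Recall: share of all relevant studies that the search found.
    Precision: share of retrieved records that were relevant."""
    recall = retrieved_relevant / total_relevant
    precision = retrieved_relevant / total_retrieved
    return recall, precision

# Invented example: 60 of 100 known relevant trials found among 1,500 retrieved records.
r, p = recall_and_precision(retrieved_relevant=60, total_relevant=100, total_retrieved=1500)
print(f"recall = {r:.0%}, precision = {p:.0%}")  # recall = 60%, precision = 4%
```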
In an SR comparing handsearching and electronic database searching, Hopewell et al. 20 found that handsearching retrieved more relevant RCTs (retrieval rate of 92%−100%) than searching in a single electronic database (retrieval rates of 67% for PsycINFO/PsycLIT, 55% for MEDLINE, and 49% for Embase). The retrieval rates varied depending on the quality of handsearching, type of electronic search strategy used (e.g., simple, complex or CHSSS), and type of trial reports searched (e.g., full reports, conference abstracts, etc.). The authors concluded that handsearching was particularly important in identifying full trials published in nonindexed journals and in languages other than English, as well as those published as abstracts and letters. 20
The effectiveness of checking reference lists to retrieve additional relevant studies for an SR was investigated by Horsley et al. 22 The review reported that checking reference lists yielded 2.5%–40% more studies depending on the quality and comprehensiveness of the electronic search used. The authors conclude that there is some evidence, although from poor quality studies, to support use of checking reference lists to supplement database searching. 22
3.3.3. Methods for selecting studies
Three approaches relevant to reviewer characteristics, including the number, experience, and blinding of reviewers involved in the screening process, were highlighted in an SR by Robson et al. 14 Based on the retrieved evidence, the authors recommended that two independent, experienced, and unblinded reviewers be involved in study selection. 14 The review authors also suggested a modified approach, in which one reviewer screens and the other verifies the list of excluded studies, when resources are limited. It should be noted, however, that this suggestion is likely based on the authors' opinion, as there was no evidence related to this from the studies included in the review.
Robson et al. 14 also reported two methods describing the use of technology for screening studies: use of Google Translate for translating languages (for example, German-language articles to English) to facilitate screening was considered a viable method, while using two computer monitors for screening did not increase the screening efficiency in SRs. Title‐first screening was found to be more efficient than simultaneous screening of titles and abstracts, although the gain in time with the former method was less than with the latter. Therefore, considering that the search results are routinely exported as titles and abstracts, Robson et al. 14 recommend screening titles and abstracts simultaneously. However, the authors note that these conclusions were based on a very limited number (in most instances one study per method) of low‐quality studies. 14
3.3.4. Methods for data extraction
Robson et al. 14 examined three approaches for data extraction relevant to reviewer characteristics, including number, experience, and blinding of reviewers (similar to the study selection step). Although based on limited evidence from a small number of studies, the authors recommended use of two experienced and unblinded reviewers for data extraction. The experience of the reviewers was suggested to be especially important when extracting continuous outcomes (or quantitative) data. However, when the resources are limited, data extraction by one reviewer and a verification of the outcomes data by a second reviewer was recommended.
As for the methods involving use of technology, Robson et al. 14 identified limited evidence on the use of two monitors to improve the data extraction efficiency and computer‐assisted programs for graphical data extraction. However, use of Google Translate for data extraction in non‐English articles was not considered to be viable. 14 In the same review, Robson et al. 14 identified evidence supporting contacting authors for obtaining additional relevant data.
3.3.5. Methods for RoB assessment
Two SRs examined the impact of blinding of reviewers for RoB assessments. 14 , 23 Morissette et al. 23 investigated the mean differences between the blinded and unblinded RoB assessment scores and found inconsistent differences among the included studies providing no definitive conclusions. Similar conclusions were drawn in a more recent review by Robson et al., 14 which included four studies on reviewer blinding for RoB assessment that completely overlapped with Morissette et al. 23
Use of experienced reviewers and provision of additional guidance for RoB assessment were examined by Robson et al. 14 The review concluded that providing intensive training and guidance on assessing studies reporting insufficient data to the reviewers improves RoB assessments. 14 Obtaining additional data related to quality assessment by contacting study authors was also found to help the RoB assessments, although based on limited evidence. When assessing the qualitative or mixed method reviews, Robson et al. 14 recommends the use of a structured RoB tool as opposed to an unstructured tool. No SRs were identified on data synthesis and CoE assessment and reporting steps.
4. DISCUSSION
4.1. Summary of findings
Nine SRs examining 24 unique methods used across five steps in the SR process were identified in this overview. The collective evidence supports some current traditional and modified SR practices, while challenging other approaches. However, the quality of the included reviews was assessed to be moderate at best and in the majority of the included SRs, evidence related to the evaluated methods was obtained from very limited numbers of primary studies. As such, the interpretations from these SRs should be made cautiously.
The evidence gathered from the included SRs corroborate a few current SR approaches. 5 For example, it is important to search multiple resources for identifying relevant trials (RCTs and/or CCTs). The resources must include a combination of electronic database searching, handsearching, and reference lists of retrieved articles. 5 However, no SRs have been identified that evaluated the impact of the number of electronic databases searched. A recent study by Halladay et al. 27 found that articles on therapeutic intervention, retrieved by searching databases other than PubMed (including Embase), contributed only a small amount of information to the MA and also had a minimal impact on the MA results. The authors concluded that when the resources are limited and when large number of studies are expected to be retrieved for the SR or MA, PubMed‐only search can yield reliable results. 27
Findings from the included SRs also reiterate some methodological modifications currently employed to “expedite” the SR process. 10 , 11 For example, excluding non‐English language trials and gray/unpublished trials from MA have been shown to have minimal or no impact on the results of MA. 24 , 26 However, the efficiency of these SR methods, in terms of time and the resources used, have not been evaluated in the included SRs. 24 , 26 Of the SRs included, only two have focused on the aspect of efficiency 14 , 25 ; O'Mara‐Eves et al. 25 report some evidence to support the use of text‐mining approaches for title and abstract screening in order to increase the rate of screening. Moreover, only one included SR 14 considered primary studies that evaluated reliability (inter‐ or intra‐reviewer consistency) and accuracy (validity when compared against a “gold standard” method) of the SR methods. This can be attributed to the limited number of primary studies that evaluated these outcomes when evaluating the SR methods. 14 Lack of outcome measures related to reliability, accuracy, and efficiency precludes making definitive recommendations on the use of these methods/modifications. Future research studies must focus on these outcomes.
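As an illustration of the text-mining screening approaches referred to above, the sketch below ranks unscreened records by predicted relevance so that likely includes are screened first. It assumes scikit-learn is available; the titles and labels are invented, and O'Mara-Eves et al. 25 review a range of such tools rather than this exact pipeline.

```python
# Minimal screening-prioritisation sketch (assumes scikit-learn is installed).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Invented training data: titles already screened, with include/exclude labels.
labelled_titles = [
    "Handsearching versus electronic searching to identify randomized trials",
    "Blinded versus unblinded risk of bias assessment in systematic reviews",
    "Dietary patterns and cardiovascular outcomes in older adults",
    "A qualitative study of patient experiences of telehealth",
]
labels = [1, 1, 0, 0]  # 1 = relevant to review methods, 0 = not relevant

vectorizer = TfidfVectorizer(ngram_range=(1, 2))
X = vectorizer.fit_transform(labelled_titles)
model = LogisticRegression().fit(X, labels)

# Score unscreened records and screen the highest-scoring ones first.
unscreened = [
    "Single versus double screening for study selection: a methods study",
    "Effect of statins on lipid profiles: a randomized controlled trial",
]
scores = model.predict_proba(vectorizer.transform(unscreened))[:, 1]
for title, score in sorted(zip(unscreened, scores), key=lambda x: -x[1]):
    print(f"{score:.2f}  {title}")
```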
Some evaluated methods may be relevant to multiple steps; for example, exclusions based on publication status (gray/unpublished literature) and language of publication (non‐English language studies) can be outlined in the a priori eligibility criteria or can be incorporated as search limits in the search strategy. SRs included in this overview focused on the effect of study exclusions on pooled treatment effect estimates or MA conclusions. Excluding studies from the search results, after conducting a comprehensive search, based on different eligibility criteria may yield different results when compared to the results obtained when limiting the search itself. 28 Further studies are required to examine this aspect.
Although we acknowledge the lack of standardized quality assessment tools for methodological study designs, we adhered to the Cochrane criteria for identifying SRs in this overview. This was done to ensure consistency in the quality of the included evidence. As a result, we excluded three reviews that did not provide any form of discussion on the quality of the included studies. The methods investigated in these reviews concern supplementary search, 29 data extraction, 12 and screening. 13 However, methods reported in two of these three reviews, by Mathes et al. 12 and Waffenschmidt et al., 13 have also been examined in the SR by Robson et al., 14 which was included in this overview; in most instances (with the exception of one study included in Mathes et al. 12 and Waffenschmidt et al. 13 each), the studies examined in these excluded reviews overlapped with those in the SR by Robson et al. 14
One of the key knowledge gaps observed in this overview was the dearth of SRs on the methods used in the data synthesis component of SRs. Narrative and quantitative syntheses are the two most commonly used approaches for synthesizing data in evidence synthesis. 5 There are some published studies on the proposed indications and implications of these two approaches. 30 , 31 These studies found that both data synthesis methods produced comparable results and have their own advantages, suggesting that the choice of method must be based on the purpose of the review. 31 With an increasing number of "expedited" SR approaches (so-called "rapid reviews") avoiding MA, 10 , 11 further research studies are warranted in this area to determine the impact of the type of data synthesis on the results of the SR.
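To make the distinction concrete, the sketch below shows the kind of quantitative synthesis referred to above: DerSimonian-Laird random-effects pooling of study effect estimates. The effect sizes and variances are invented, and this is a minimal sketch of a standard method, not an analysis performed by any of the included reviews.

```python
import math

def dersimonian_laird(effects, variances):
    """Pool study effect estimates with a DerSimonian-Laird random-effects model."""
    k = len(effects)
    w_fixed = [1 / v for v in variances]
    pooled_fixed = sum(w * y for w, y in zip(w_fixed, effects)) / sum(w_fixed)
    # Cochran's Q and the method-of-moments estimate of between-study variance.
    q = sum(w * (y - pooled_fixed) ** 2 for w, y in zip(w_fixed, effects))
    c = sum(w_fixed) - sum(w ** 2 for w in w_fixed) / sum(w_fixed)
    tau2 = max(0.0, (q - (k - 1)) / c)
    # Random-effects weights incorporate tau^2.
    w_random = [1 / (v + tau2) for v in variances]
    pooled = sum(w * y for w, y in zip(w_random, effects)) / sum(w_random)
    se = math.sqrt(1 / sum(w_random))
    return pooled, se, tau2

# Invented log odds ratios and variances from three hypothetical trials.
pooled, se, tau2 = dersimonian_laird([-0.30, -0.10, -0.50], [0.04, 0.09, 0.06])
print(f"pooled = {pooled:.2f} (95% CI {pooled - 1.96*se:.2f} to {pooled + 1.96*se:.2f}), tau^2 = {tau2:.3f}")
```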
4.2. Implications for future research
The findings of this overview highlight several areas of paucity in primary research and evidence synthesis on SR methods. First, no SRs were identified on methods used in two important components of the SR process, including data synthesis and CoE and reporting. As for the included SRs, a limited number of evaluation studies have been identified for several methods. This indicates that further research is required to corroborate many of the methods recommended in current SR guidelines. 4 , 5 , 6 , 7 Second, some SRs evaluated the impact of methods on the results of quantitative synthesis and MA conclusions. Future research studies must also focus on the interpretations of SR results. 28 , 32 Finally, most of the included SRs were conducted on specific topics related to the field of health care, limiting the generalizability of the findings to other areas. It is important that future research studies evaluating evidence syntheses broaden the objectives and include studies on different topics within the field of health care.
4.3. Strengths and limitations
To our knowledge, this is the first overview summarizing current evidence from SRs and MA on different methodological approaches used in several fundamental steps in SR conduct. The overview methodology followed well established guidelines and strict criteria defined for the inclusion of SRs.
There are several limitations related to the nature of the included reviews. Evidence for most of the methods investigated in the included reviews was derived from a limited number of primary studies. Also, the majority of the included SRs may be considered outdated as they were published (or last updated) more than 5 years ago 33 ; only three of the nine SRs have been published in the last 5 years. 14 , 25 , 26 Therefore, important and recent evidence related to these topics may not have been included. Substantial numbers of included SRs were conducted in the field of health, which may limit the generalizability of the findings. Some method evaluations in the included SRs focused on quantitative analyses components and MA conclusions only. As such, the applicability of these findings to SR more broadly is still unclear. 28 Considering the methodological nature of our overview, limiting the inclusion of SRs according to the Cochrane criteria might have resulted in missing some relevant evidence from those reviews without a quality assessment component. 12 , 13 , 29 Although the included SRs performed some form of quality appraisal of the included studies, most of them did not use a standardized RoB tool, which may impact the confidence in their conclusions. Due to the type of outcome measures used for the method evaluations in the primary studies and the included SRs, some of the identified methods have not been validated against a reference standard.
Some limitations in the overview process must be noted. While our literature search was exhaustive covering five bibliographic databases and supplementary search of reference lists, no gray sources or other evidence resources were searched. Also, the search was primarily conducted in health databases, which might have resulted in missing SRs published in other fields. Moreover, only English language SRs were included for feasibility. As the literature search retrieved large number of citations (i.e., 41,556), the title and abstract screening was performed by a single reviewer, calibrated for consistency in the screening process by another reviewer, owing to time and resource limitations. These might have potentially resulted in some errors when retrieving and selecting relevant SRs. The SR methods were grouped based on key elements of each recommended SR step, as agreed by the authors. This categorization pertains to the identified set of methods and should be considered subjective.
5. CONCLUSIONS
This overview identified limited SR‐level evidence on various methodological approaches currently employed during five of the seven fundamental steps in the SR process. Limited evidence was also identified on some methodological modifications currently used to expedite the SR process. Overall, findings highlight the dearth of SRs on SR methodologies, warranting further work to confirm several current recommendations on conventional and expedited SR processes.
CONFLICT OF INTEREST
The authors declare no conflicts of interest.
Supporting information
APPENDIX A: Detailed search strategies
APPENDIX B: List of excluded studies with detailed reasons for exclusion
APPENDIX C: Quality assessment of included reviews using AMSTAR 2
ACKNOWLEDGMENTS
The first author is supported by a La Trobe University Full Fee Research Scholarship and a Graduate Research Scholarship.
Open Access Funding provided by La Trobe University.
Veginadu P, Calache H, Gussy M, Pandian A, Masood M. An overview of methodological approaches in systematic reviews. J Evid Based Med. 2022;15:39–54. 10.1111/jebm.12468
REFERENCES
1. Ioannidis JPA. Evolution and translation of research findings: from bench to where. PLoS Clin Trials. 2006;1(7):e36.
2. Crocetti E. Systematic reviews with meta-analysis: why, when, and how? Emerg Adulthood. 2016;4(1):3–18.
3. Moher D, Liberati A, Tetzlaff J, Altman DG, the PRISMA Group. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. PLoS Med. 2009;6(7):e1000097.
4. Akers J. Systematic Reviews: CRD's Guidance for Undertaking Reviews in Health Care. CRD, University of York; 2009.
5. Higgins JPT, Thomas J, Chandler J, et al., eds. Cochrane Handbook for Systematic Reviews of Interventions Version 6.3 (updated February 2022). Cochrane; 2022. Available from: http://www.training.cochrane.org/handbook.
6. Joanna Briggs Institute. Joanna Briggs Institute Reviewers' Manual: 2015 Edition/Supplement. The Joanna Briggs Institute; 2015.
7. Methods Group of the Campbell Collaboration. Methodological expectations of Campbell Collaboration intervention reviews: conduct standards. 2016.
8. Chandler J, Hopewell S. Cochrane methods—twenty years experience in developing systematic review methods. Syst Rev. 2013;2(1):76.
9. Tsertsvadze A, Chen Y-F, Moher D, Sutcliffe P, McCarthy N. How to conduct systematic reviews more expeditiously? Syst Rev. 2015;4:160.
10. Ganann R, Ciliska D, Thomas H. Expediting systematic reviews: methods and implications of rapid reviews. Implement Sci. 2010;5:56.
11. Tricco AC, Antony J, Zarin W, et al. A scoping review of rapid review methods. BMC Med. 2015;13(1):224.
12. Mathes T, Klasen P, Pieper D. Frequency of data extraction errors and methods to increase data extraction quality: a methodological review. BMC Med Res Methodol. 2017;17(1):152.
13. Waffenschmidt S, Knelangen M, Sieben W, Buhn S, Pieper D. Single screening versus conventional double screening for study selection in systematic reviews: a methodological systematic review. BMC Med Res Methodol. 2019;19:132.
14. Robson RC, Pham B, Hwee J, et al. Few studies exist examining methods for selecting studies, abstracting data, and appraising quality in a systematic review. J Clin Epidemiol. 2019;106:121–135.
15. Pollock M, Fernandes RM, Becker LA, Pieper D, Hartling L. Chapter V: overviews of reviews. In: Higgins JPT, Thomas J, Chandler J, et al., eds. Cochrane Handbook for Systematic Reviews of Interventions Version 6.3 (updated February 2022). Cochrane; 2022. Available from: http://www.training.cochrane.org/handbook.
16. Montori VM, Wilczynski NL, Morgan D, Haynes RB. Optimal search strategies for retrieving systematic reviews from Medline: analytical survey. BMJ. 2005;330(7482):68.
17. Wilczynski NL, Haynes RB. EMBASE search strategies achieved high sensitivity and specificity for retrieving methodologically sound systematic reviews. J Clin Epidemiol. 2007;60(1):29–33.
18. Shea BJ, Reeves BC, Wells G, et al. AMSTAR 2: a critical appraisal tool for systematic reviews that include randomised or non-randomised studies of healthcare interventions, or both. BMJ. 2017;358:j4008.
19. Crumley ET, Wiebe N, Cramer K, Klassen TP, Hartling L. Which resources should be used to identify RCT/CCTs for systematic reviews: a systematic review. BMC Med Res Methodol. 2005;5:24.
20. Hopewell S, Clarke MJ, Lefebvre C, Scherer RW. Handsearching versus electronic searching to identify reports of randomized trials. Cochrane Database Syst Rev. 2007;2007(2):MR000001.
21. Hopewell S, McDonald S, Clarke M, Egger M. Grey literature in meta-analyses of randomized trials of health care interventions. Cochrane Database Syst Rev. 2007;(2):MR000010.
22. Horsley T, Dingwall O, Sampson M. Checking reference lists to find additional studies for systematic reviews. Cochrane Database Syst Rev. 2011;2011(8):MR000026.
23. Morissette K, Tricco AC, Horsley T, Chen MH, Moher D. Blinded versus unblinded assessments of risk of bias in studies included in a systematic review. Cochrane Database Syst Rev. 2011;2011(9):MR000025.
24. Morrison A, Polisena J, Husereau D, et al. The effect of English-language restriction on systematic review-based meta-analyses: a systematic review of empirical studies. Int J Technol Assess Health Care. 2012;28(2):138–144.
25. O'Mara-Eves A, Thomas J, McNaught J, Miwa M, Ananiadou S. Using text mining for study identification in systematic reviews: a systematic review of current approaches. Syst Rev. 2015;4(1):5.
26. Schmucker CM, Blumle A, Schell LK, et al. Systematic review finds that study data not published in full text articles have unclear impact on meta-analyses results in medical research. PLoS ONE. 2017;12(4):e0176210.
27. Halladay CW, Trikalinos TA, Schmid IT, Schmid CH, Dahabreh IJ. Using data sources beyond PubMed has a modest impact on the results of systematic reviews of therapeutic interventions. J Clin Epidemiol. 2015;68(9):1076–1084.
28. Nussbaumer-Streit B, Klerings I, Dobrescu A, et al. Excluding non-English publications from evidence-syntheses did not change conclusions: a meta-epidemiological study. J Clin Epidemiol. 2020;118:42–54.
29. Cooper C, Booth A, Britten N, Garside R. A comparison of results of empirical studies of supplementary search techniques and recommendations in review methodology handbooks: a methodological review. Syst Rev. 2017;6(1):234.
30. Melendez-Torres GJ, O'Mara-Eves A, Thomas J, Brunton G, Caird J, Petticrew M. Interpretive analysis of 85 systematic reviews suggests that narrative syntheses and meta-analyses are incommensurate in argumentation. Res Synth Methods. 2017;8(1):109–118.
31. Melendez-Torres GJ, Thomas J, Lorenc T, O'Mara-Eves A, Petticrew M. Just how plain are plain tobacco packs: re-analysis of a systematic review using multilevel meta-analysis suggests lessons about the comparative benefits of synthesis methods. Syst Rev. 2018;7(1):153.
32. Nussbaumer-Streit B, Klerings I, Wagner G, et al. Abbreviated literature searches were viable alternatives to comprehensive searches: a meta-epidemiological study. J Clin Epidemiol. 2018;102:1–11.
33. Shojania KG, Sampson M, Ansari MT, Ji J, Doucette S, Moher D. How quickly do systematic reviews go out of date? A survival analysis. Ann Intern Med. 2007;147(4):224–233.
How to Do a Systematic Review: A Best Practice Guide for Conducting and Reporting Narrative Reviews, Meta-Analyses, and Meta-Syntheses
Affiliations
- 1 Behavioural Science Centre, Stirling Management School, University of Stirling, Stirling FK9 4LA, United Kingdom.
- 2 Department of Psychological and Behavioural Science, London School of Economics and Political Science, London WC2A 2AE, United Kingdom.
- 3 Department of Statistics, Northwestern University, Evanston, Illinois 60208, USA.
- PMID: 30089228
- DOI: 10.1146/annurev-psych-010418-102803
Systematic reviews are characterized by a methodical and replicable methodology and presentation. They involve a comprehensive search to locate all relevant published and unpublished work on a subject; a systematic integration of search results; and a critique of the extent, nature, and quality of evidence in relation to a particular research question. The best reviews synthesize studies to draw broad theoretical conclusions about what a literature means, linking theory to evidence and evidence to theory. This guide describes how to plan, conduct, organize, and present a systematic review of quantitative (meta-analysis) or qualitative (narrative review, meta-synthesis) information. We outline core standards and principles and describe commonly encountered problems. Although this guide targets psychological scientists, its high level of abstraction makes it potentially relevant to any subject area or discipline. We argue that systematic reviews are a key methodology for clarifying whether and how research findings replicate and for explaining possible inconsistencies, and we call for researchers to conduct systematic reviews to help elucidate whether there is a replication crisis.
Keywords: evidence; guide; meta-analysis; meta-synthesis; narrative; systematic review; theory.
Literature Review
A literature review is a comprehensive survey of the works published in a particular field of study or line of research, usually over a specific period of time, in the form of an in-depth, critical bibliographic essay or annotated list in which attention is drawn to the most significant works.
A literature review can also be defined as the collected body of scholarly works related to a topic; it:
- Summarizes and analyzes previous research relevant to a topic
- Includes scholarly books and articles published in academic journals
- Can be a specific scholarly paper or a section in a research paper
The objective of a literature review is to find previously published scholarly works relevant to a specific topic. A literature review can:
- Help gather ideas or information
- Keep you up to date with current trends and findings
- Help develop new questions
A literature review is important because it:
- Explains the background of research on a topic.
- Demonstrates why a topic is significant to a subject area.
- Helps focus your own research questions or problems.
- Discovers relationships between research studies/ideas.
- Suggests unexplored ideas or populations.
- Identifies major themes, concepts, and researchers on a topic.
- Tests assumptions; may help counter preconceived ideas and remove unconscious bias.
- Identifies critical gaps, points of disagreement, or potentially flawed methodology or theoretical approaches.
- Indicates potential directions for future research.
All content in this section is from the Literature Review Research guide from Old Dominion University.
Keep in mind that a literature review is NOT:
- An essay.
- An annotated bibliography, in which you summarize each article that you have reviewed. A literature review goes beyond basic summarizing to focus on the critical analysis of the reviewed works and their relationship to your research question.
- A research paper, in which you select resources to support one side of an issue versus another. A literature review should explain and consider all sides of an argument in order to avoid bias, and areas of agreement and disagreement should be highlighted.
A literature review serves several purposes. For example, it
- provides thorough knowledge of previous studies; introduces seminal works.
- helps focus one’s own research topic.
- identifies a conceptual framework for one’s own research questions or problems; indicates potential directions for future research.
- suggests previously unused or underused methodologies, designs, quantitative and qualitative strategies.
- identifies gaps in previous studies; identifies flawed methodologies and/or theoretical approaches; avoids replication of mistakes.
- helps the researcher avoid repetition of earlier research.
- suggests unexplored populations.
- determines whether past studies agree or disagree; identifies controversy in the literature.
- tests assumptions; may help counter preconceived ideas and remove unconscious bias.
As Kennedy (2007) notes*, it is important to think of knowledge in a given field as consisting of three layers. First, there are the primary studies that researchers conduct and publish. Second are the reviews of those studies that summarize and offer new interpretations built from and often extending beyond the original studies. Third, there are the perceptions, conclusions, opinions, and interpretations that are shared informally and become part of the lore of the field. In composing a literature review, it is important to note that it is often this third layer of knowledge that is cited as "true," even though it often has only a loose relationship to the primary studies and secondary literature reviews.
Given this, while literature reviews are designed to provide an overview and synthesis of pertinent sources you have explored, there are several approaches to how they can be done, depending upon the type of analysis underpinning your study. Listed below are definitions of types of literature reviews:
Argumentative Review This form examines literature selectively in order to support or refute an argument, deeply embedded assumption, or philosophical problem already established in the literature. The purpose is to develop a body of literature that establishes a contrarian viewpoint. Given the value-laden nature of some social science research [e.g., educational reform; immigration control], argumentative approaches to analyzing the literature can be a legitimate and important form of discourse. However, note that they can also introduce problems of bias when they are used to make summary claims of the sort found in systematic reviews.
Integrative Review Considered a form of research that reviews, critiques, and synthesizes representative literature on a topic in an integrated way such that new frameworks and perspectives on the topic are generated. The body of literature includes all studies that address related or identical hypotheses. A well-done integrative review meets the same standards as primary research in regard to clarity, rigor, and replication.
Historical Review Few things rest in isolation from historical precedent. Historical reviews are focused on examining research throughout a period of time, often starting with the first time an issue, concept, theory, or phenomenon emerged in the literature, then tracing its evolution within the scholarship of a discipline. The purpose is to place research in a historical context to show familiarity with state-of-the-art developments and to identify the likely directions for future research.
Methodological Review A review does not always focus on what someone said [content], but on how they said it [method of analysis]. This approach provides a framework of understanding at different levels (i.e., theory, substantive fields, research approaches, and data collection and analysis techniques). It enables researchers to draw on a wide variety of knowledge, ranging from the conceptual level to practical documents for use in fieldwork, in the areas of ontological and epistemological considerations, quantitative and qualitative integration, sampling, interviewing, data collection, and data analysis. It also helps highlight many ethical issues of which we should be aware as we go through our study.
Systematic Review This form consists of an overview of existing evidence pertinent to a clearly formulated research question, which uses pre-specified and standardized methods to identify and critically appraise relevant research, and to collect, report, and analyse data from the studies that are included in the review. Typically it focuses on a very specific empirical question, often posed in a cause-and-effect form, such as "To what extent does A contribute to B?"
Theoretical Review The purpose of this form is to concretely examine the corpus of theory that has accumulated in regard to an issue, concept, theory, or phenomenon. The theoretical literature review helps establish what theories already exist, the relationships between them, and to what degree the existing theories have been investigated, and it helps develop new hypotheses to be tested. Often this form is used to help establish a lack of appropriate theories or reveal that current theories are inadequate for explaining new or emerging research problems. The unit of analysis can focus on a theoretical concept or a whole theory or framework.
* Kennedy, Mary M. "Defining a Literature." Educational Researcher 36 (April 2007): 139-147.
All content in this section is from The Literature Review guide created by Dr. Robert Larabee at the University of Southern California (USC).
Robinson, P. and Lowe, J. (2015), Literature reviews vs systematic reviews. Australian and New Zealand Journal of Public Health, 39: 103-103. doi: 10.1111/1753-6405.12393
What's in a name? The difference between a Systematic Review and a Literature Review, and why it matters. By Lynn Kysh from the University of Southern California.
Systematic review or meta-analysis?
A systematic review answers a defined research question by collecting and summarizing all empirical evidence that fits pre-specified eligibility criteria.
A meta-analysis is the use of statistical methods to summarize the results of these studies.
Systematic reviews, just like other research articles, can be of varying quality. They are a significant piece of work (the Centre for Reviews and Dissemination at York estimates that a team will take 9-24 months), and to be useful to other researchers and practitioners they should have:
- clearly stated objectives with pre-defined eligibility criteria for studies
- explicit, reproducible methodology
- a systematic search that attempts to identify all studies
- assessment of the validity of the findings of the included studies (e.g. risk of bias)
- systematic presentation, and synthesis, of the characteristics and findings of the included studies
Not all systematic reviews contain meta-analysis.
Meta-analysis is the use of statistical methods to summarize the results of independent studies. By combining information from all relevant studies, meta-analysis can provide more precise estimates of the effects of health care than those derived from the individual studies included within a review. More information on meta-analyses can be found in Cochrane Handbook, Chapter 9 .
A meta-analysis goes beyond critique and integration and conducts secondary statistical analysis on the outcomes of similar studies. It is a systematic review that uses quantitative methods to synthesize and summarize the results.
An advantage of a meta-analysis is that it offers a relatively objective, statistical way of evaluating research findings. Not all topics, however, have sufficient research evidence to allow a meta-analysis to be conducted. In that case, an integrative review is an appropriate strategy.
Some of the content in this section is from Systematic reviews and meta-analyses: step by step guide created by Kate McAllister.
- Source: https://guides.lib.udel.edu/researchmethods
Writing a Literature Review
This page is brought to you by the OWL at Purdue University.
A literature review is a document or section of a document that collects key sources on a topic and discusses those sources in conversation with each other (also called synthesis ). The lit review is an important genre in many disciplines, not just literature (i.e., the study of works of literature such as novels and plays). When we say “literature review” or refer to “the literature,” we are talking about the research ( scholarship ) in a given field. You will often see the terms “the research,” “the scholarship,” and “the literature” used mostly interchangeably.
Where, when, and why would I write a lit review?
There are a number of different situations where you might write a literature review, each with slightly different expectations; different disciplines, too, have field-specific expectations for what a literature review is and does. For instance, in the humanities, authors might include more overt argumentation and interpretation of source material in their literature reviews, whereas in the sciences, authors are more likely to report study designs and results in their literature reviews; these differences reflect these disciplines’ purposes and conventions in scholarship. You should always look at examples from your own discipline and talk to professors or mentors in your field to be sure you understand your discipline’s conventions, for literature reviews as well as for any other genre.
A literature review can be a part of a research paper or scholarly article, usually falling after the introduction and before the research methods sections. In these cases, the lit review just needs to cover scholarship that is important to the issue you are writing about; sometimes it will also cover key sources that informed your research methodology.
Lit reviews can also be standalone pieces, either as assignments in a class or as publications. In a class, a lit review may be assigned to help students familiarize themselves with a topic and with scholarship in their field, get an idea of the other researchers working on the topic they’re interested in, find gaps in existing research in order to propose new projects, and/or develop a theoretical framework and methodology for later research. As a publication, a lit review usually is meant to help make other scholars’ lives easier by collecting and summarizing, synthesizing, and analyzing existing research on a topic. This can be especially helpful for students or scholars getting into a new research area, or for directing an entire community of scholars toward questions that have not yet been answered.
What are the parts of a lit review?
Most lit reviews use a basic introduction-body-conclusion structure; if your lit review is part of a larger paper, the introduction and conclusion pieces may be just a few sentences while you focus most of your attention on the body. If your lit review is a standalone piece, the introduction and conclusion take up more space and give you a place to discuss your goals, research methods, and conclusions separately from where you discuss the literature itself.
Introduction:
- An introductory paragraph that explains what your working topic and thesis are
- A forecast of key topics or texts that will appear in the review
- Potentially, a description of how you found sources and how you analyzed them for inclusion and discussion in the review (more often found in published, standalone literature reviews than in lit review sections in an article or research paper)
Body:
- Summarize and synthesize: Give an overview of the main points of each source and combine them into a coherent whole
- Analyze and interpret: Don’t just paraphrase other researchers – add your own interpretations where possible, discussing the significance of findings in relation to the literature as a whole
- Critically Evaluate: Mention the strengths and weaknesses of your sources
- Write in well-structured paragraphs: Use transition words and topic sentences to draw connections, comparisons, and contrasts.
Conclusion:
- Summarize the key findings you have taken from the literature and emphasize their significance
- Connect it back to your primary research question
How should I organize my lit review?
Lit reviews can take many different organizational patterns depending on what you are trying to accomplish with the review. Here are some examples:
- Chronological : The simplest approach is to trace the development of the topic over time, which helps familiarize the audience with the topic (for instance if you are introducing something that is not commonly known in your field). If you choose this strategy, be careful to avoid simply listing and summarizing sources in order. Try to analyze the patterns, turning points, and key debates that have shaped the direction of the field. Give your interpretation of how and why certain developments occurred (as mentioned previously, this may not be appropriate in your discipline — check with a teacher or mentor if you’re unsure).
- Thematic : If you have found some recurring central themes that you will continue working with throughout your piece, you can organize your literature review into subsections that address different aspects of the topic. For example, if you are reviewing literature about women and religion, key themes can include the role of women in churches and the religious attitude towards women.
- Methodological : If you draw your sources from different disciplines or fields that use a variety of research methods, you might want to compare the results and conclusions that emerge from different approaches, for example:
- Qualitative versus quantitative research
- Empirical versus theoretical scholarship
- Divide the research by sociological, historical, or cultural sources
- Theoretical : In many humanities articles, the literature review is the foundation for the theoretical framework. You can use it to discuss various theories, models, and definitions of key concepts. You can argue for the relevance of a specific theoretical approach or combine various theoretical concepts to create a framework for your research.
What are some strategies or tips I can use while writing my lit review?
Any lit review is only as good as the research it discusses; make sure your sources are well-chosen and your research is thorough. Don’t be afraid to do more research if you discover a new thread as you’re writing. More info on the research process is available in our "Conducting Research" resources .
As you’re doing your research, create an annotated bibliography (see our page on this type of document). Much of the information used in an annotated bibliography can also be used in a literature review, so you’ll be not only partially drafting your lit review as you research, but also developing your sense of the larger conversation going on among scholars, professionals, and any other stakeholders in your topic.
Usually you will need to synthesize research rather than just summarizing it. This means drawing connections between sources to create a picture of the scholarly conversation on a topic over time. Many student writers struggle to synthesize because they feel they don’t have anything to add to the scholars they are citing; here are some strategies to help you:
- It often helps to remember that the point of these kinds of syntheses is to show your readers how you understand your research, to help them read the rest of your paper.
- Writing teachers often say synthesis is like hosting a dinner party: imagine all your sources are together in a room, discussing your topic. What are they saying to each other?
- Look at the in-text citations in each paragraph. Are you citing just one source for each paragraph? This usually indicates summary only. When you have multiple sources cited in a paragraph, you are more likely to be synthesizing them (not always, but often).
The most interesting literature reviews are often written as arguments (again, as mentioned at the beginning of the page, this is discipline-specific and doesn’t work for all situations). Often, the literature review is where you can establish your research as filling a particular gap or as relevant in a particular way. You have some chance to do this in your introduction in an article, but the literature review section gives a more extended opportunity to establish the conversation in the way you would like your readers to see it. You can choose the intellectual lineage you would like to be part of and whose definitions matter most to your thinking (mostly humanities-specific, but this goes for sciences as well). In addressing these points, you argue for your place in the conversation, which tends to make the lit review more compelling than a simple reporting of other sources.
- University of Texas Libraries
Literature Reviews
- Steps in the Literature Review Process
- Define your research question
- Determine inclusion and exclusion criteria
- Choose databases and search
- Review Results
- Synthesize Results
- Analyze Results
What is a Literature Review?
A literature or narrative review is a comprehensive review and analysis of the published literature on a specific topic or research question. The literature reviewed may include books, academic articles, conference proceedings, association papers, and dissertations. It contains the most pertinent studies and points to important past and current research and practices. It provides background and context, and shows how your research will contribute to the field.
A literature review should:
- Provide a comprehensive and updated review of the literature;
- Explain why this review has taken place;
- Articulate a position or hypothesis;
- Acknowledge and account for conflicting and corroborating points of view.
From Sage Research Methods
Purpose of a Literature Review
A literature review can be written as an introduction to a study to:
- Demonstrate how a study fills a gap in research
- Compare a study with other research that's been done
Or it can be a separate work (a research article on its own) which:
- Organizes or describes a topic
- Describes variables within a particular issue/problem
Limitations of a Literature Review
Some of the limitations of a literature review are:
- It's a snapshot in time. Unlike other reviews, this one has a beginning, a middle and an end. There may be future developments that could make your work less relevant.
- It may be too focused. Some niche studies may miss the bigger picture.
- It can be difficult to be comprehensive. There is no way to make sure all the literature on a topic was considered.
- It is easy to be biased if you stick to top tier journals. There may be other places where people are publishing exemplary research. Look to open access publications and conferences to reflect a more inclusive collection. Also, make sure to include opposing views (and not just supporting evidence).
Source: Grant, Maria J., and Andrew Booth. “A Typology of Reviews: An Analysis of 14 Review Types and Associated Methodologies.” Health Information & Libraries Journal, vol. 26, no. 2, June 2009, pp. 91–108. Wiley Online Library, doi:10.1111/j.1471-1842.2009.00848.x.
Librarian Assistance
For help, please contact the librarian for your subject area. We have a guide to library specialists by subject.
Periodically, UT Libraries runs a workshop covering the basics and library support for literature reviews. While we try to offer these once per academic year, we find providing the recording to be helpful to community members who have missed the session. Following is the most recent recording of the workshop, Conducting a Literature Review. To view the recording, a UT login is required.
- October 22, 2024
- Source: https://guides.lib.utexas.edu/literaturereviews
Lau F, Kuziemsky C, editors. Handbook of eHealth Evaluation: An Evidence-based Approach [Internet]. Victoria (BC): University of Victoria; 2017 Feb 27.
Chapter 9. Methods for Literature Reviews
Guy Paré and Spyros Kitsiou.
9.1. Introduction
Literature reviews play a critical role in scholarship because science remains, first and foremost, a cumulative endeavour ( vom Brocke et al., 2009 ). As in any academic discipline, rigorous knowledge syntheses are becoming indispensable in keeping up with an exponentially growing eHealth literature, assisting practitioners, academics, and graduate students in finding, evaluating, and synthesizing the contents of many empirical and conceptual papers. Among other methods, literature reviews are essential for: (a) identifying what has been written on a subject or topic; (b) determining the extent to which a specific research area reveals any interpretable trends or patterns; (c) aggregating empirical findings related to a narrow research question to support evidence-based practice; (d) generating new frameworks and theories; and (e) identifying topics or questions requiring more investigation ( Paré, Trudel, Jaana, & Kitsiou, 2015 ).
Literature reviews can take two major forms. The most prevalent one is the “literature review” or “background” section within a journal paper or a chapter in a graduate thesis. This section synthesizes the extant literature and usually identifies the gaps in knowledge that the empirical study addresses ( Sylvester, Tate, & Johnstone, 2013 ). It may also provide a theoretical foundation for the proposed study, substantiate the presence of the research problem, justify the research as one that contributes something new to the cumulated knowledge, or validate the methods and approaches for the proposed study ( Hart, 1998 ; Levy & Ellis, 2006 ).
The second form of literature review, which is the focus of this chapter, constitutes an original and valuable work of research in and of itself ( Paré et al., 2015 ). Rather than providing a base for a researcher’s own work, it creates a solid starting point for all members of the community interested in a particular area or topic ( Mulrow, 1987 ). The so-called “review article” is a journal-length paper which has an overarching purpose to synthesize the literature in a field, without collecting or analyzing any primary data ( Green, Johnson, & Adams, 2006 ).
When appropriately conducted, review articles represent powerful information sources for practitioners looking for state-of-the art evidence to guide their decision-making and work practices ( Paré et al., 2015 ). Further, high-quality reviews become frequently cited pieces of work which researchers seek out as a first clear outline of the literature when undertaking empirical studies ( Cooper, 1988 ; Rowe, 2014 ). Scholars who track and gauge the impact of articles have found that review papers are cited and downloaded more often than any other type of published article ( Cronin, Ryan, & Coughlan, 2008 ; Montori, Wilczynski, Morgan, Haynes, & Hedges, 2003 ; Patsopoulos, Analatos, & Ioannidis, 2005 ). The reason for their popularity may be the fact that reading the review enables one to have an overview, if not a detailed knowledge of the area in question, as well as references to the most useful primary sources ( Cronin et al., 2008 ). Although they are not easy to conduct, the commitment to complete a review article provides a tremendous service to one’s academic community ( Paré et al., 2015 ; Petticrew & Roberts, 2006 ). Most, if not all, peer-reviewed journals in the fields of medical informatics publish review articles of some type.
The main objectives of this chapter are fourfold: (a) to provide an overview of the major steps and activities involved in conducting a stand-alone literature review; (b) to describe and contrast the different types of review articles that can contribute to the eHealth knowledge base; (c) to illustrate each review type with one or two examples from the eHealth literature; and (d) to provide a series of recommendations for prospective authors of review articles in this domain.
9.2. Overview of the Literature Review Process and Steps
As explained in Templier and Paré (2015) , there are six generic steps involved in conducting a review article:
- formulating the research question(s) and objective(s),
- searching the extant literature,
- screening for inclusion,
- assessing the quality of primary studies,
- extracting data, and
- analyzing data.
Although these steps are presented here in sequential order, one must keep in mind that the review process can be iterative and that many activities can be initiated during the planning stage and later refined during subsequent phases ( Finfgeld-Connett & Johnson, 2013 ; Kitchenham & Charters, 2007 ).
Formulating the research question(s) and objective(s): As a first step, members of the review team must appropriately justify the need for the review itself ( Petticrew & Roberts, 2006 ), identify the review’s main objective(s) ( Okoli & Schabram, 2010 ), and define the concepts or variables at the heart of their synthesis ( Cooper & Hedges, 2009 ; Webster & Watson, 2002 ). Importantly, they also need to articulate the research question(s) they propose to investigate ( Kitchenham & Charters, 2007 ). In this regard, we concur with Jesson, Matheson, and Lacey (2011) that clearly articulated research questions are key ingredients that guide the entire review methodology; they underscore the type of information that is needed, inform the search for and selection of relevant literature, and guide or orient the subsequent analysis.
Searching the extant literature: The next step consists of searching the literature and making decisions about the suitability of material to be considered in the review ( Cooper, 1988 ). There exist three main coverage strategies. First, exhaustive coverage means an effort is made to be as comprehensive as possible in order to ensure that all relevant studies, published and unpublished, are included in the review and, thus, conclusions are based on this all-inclusive knowledge base. The second type of coverage consists of presenting materials that are representative of most other works in a given field or area. Often authors who adopt this strategy will search for relevant articles in a small number of top-tier journals in a field ( Paré et al., 2015 ). In the third strategy, the review team concentrates on prior works that have been central or pivotal to a particular topic. This may include empirical studies or conceptual papers that initiated a line of investigation, changed how problems or questions were framed, introduced new methods or concepts, or engendered important debate ( Cooper, 1988 ).
Screening for inclusion: The following step consists of evaluating the applicability of the material identified in the preceding step ( Levy & Ellis, 2006 ; vom Brocke et al., 2009 ). Once a group of potential studies has been identified, members of the review team must screen them to determine their relevance ( Petticrew & Roberts, 2006 ). A set of predetermined rules provides a basis for including or excluding certain studies. This exercise requires a significant investment on the part of researchers, who must ensure enhanced objectivity and avoid biases or mistakes. As discussed later in this chapter, for certain types of reviews there must be at least two independent reviewers involved in the screening process and a procedure to resolve disagreements must also be in place ( Liberati et al., 2009 ; Shea et al., 2009 ).
Assessing the quality of primary studies: In addition to screening material for inclusion, members of the review team may need to assess the scientific quality of the selected studies, that is, appraise the rigour of the research design and methods. Such formal assessment, which is usually conducted independently by at least two coders, helps members of the review team refine which studies to include in the final sample, determine whether or not the differences in quality may affect their conclusions, or guide how they analyze the data and interpret the findings ( Petticrew & Roberts, 2006 ). Ascribing quality scores to each primary study or considering through domain-based evaluations which study components have or have not been designed and executed appropriately makes it possible to reflect on the extent to which the selected study addresses possible biases and maximizes validity ( Shea et al., 2009 ).
Extracting data: The following step involves gathering or extracting applicable information from each primary study included in the sample and deciding what is relevant to the problem of interest ( Cooper & Hedges, 2009 ). Indeed, the type of data that should be recorded mainly depends on the initial research questions ( Okoli & Schabram, 2010 ). However, important information may also be gathered about how, when, where and by whom the primary study was conducted, the research design and methods, or qualitative/quantitative results ( Cooper & Hedges, 2009 ).
Analyzing and synthesizing data: As a final step, members of the review team must collate, summarize, aggregate, organize, and compare the evidence extracted from the included studies. The extracted data must be presented in a meaningful way that suggests a new contribution to the extant literature ( Jesson et al., 2011 ). Webster and Watson (2002) warn researchers that literature reviews should be much more than lists of papers and should provide a coherent lens to make sense of extant knowledge on a given topic. There exist several methods and techniques for synthesizing quantitative (e.g., frequency analysis, meta-analysis) and qualitative (e.g., grounded theory, narrative analysis, meta-ethnography) evidence ( Dixon-Woods, Agarwal, Jones, Young, & Sutton, 2005 ; Thomas & Harden, 2008 ).
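To make the screening step described above more concrete, the following is a minimal Python sketch of rule-based inclusion screening with two independent passes. The record fields, eligibility rules, and study titles are hypothetical illustrations; in a real review the criteria would come from the registered protocol, not from code.

```python
# Minimal sketch of rule-based screening for inclusion (illustrative only).
# Record fields and criteria are hypothetical; real reviews pre-specify them
# in the review protocol before screening starts.

from dataclasses import dataclass

@dataclass
class Record:
    title: str
    year: int
    design: str       # e.g., "RCT", "cohort", "editorial"
    population: str   # e.g., "adults with heart failure"

def reviewer_a(rec: Record) -> bool:
    """First reviewer's application of the pre-specified rules."""
    return (rec.year >= 2010
            and rec.design in {"RCT", "cohort"}
            and "heart failure" in rec.population)

def reviewer_b(rec: Record) -> bool:
    """Second reviewer, working independently (stricter on design here)."""
    return (rec.year >= 2010
            and rec.design == "RCT"
            and "heart failure" in rec.population)

def screen(records):
    """Flag disagreements so they can be resolved by discussion or a third reviewer."""
    for rec in records:
        a, b = reviewer_a(rec), reviewer_b(rec)
        status = "include" if a and b else "exclude" if not (a or b) else "resolve"
        yield rec.title, a, b, status

records = [
    Record("Telemonitoring trial", 2015, "RCT", "adults with heart failure"),
    Record("Registry analysis", 2018, "cohort", "adults with heart failure"),
    Record("Opinion piece", 2012, "editorial", "clinicians"),
]
for row in screen(records):
    print(row)
```

The point of the sketch is simply that every record is judged against the same predetermined rules and that disagreements between independent reviewers are surfaced explicitly rather than silently overwritten.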
9.3. Types of Review Articles and Brief Illustrations
EHealth researchers have at their disposal a number of approaches and methods for making sense out of existing literature, all with the purpose of casting current research findings into historical contexts or explaining contradictions that might exist among a set of primary research studies conducted on a particular topic. Our classification scheme is largely inspired by Paré and colleagues’ (2015) typology. Below we present and illustrate those review types that we feel are central to the growth and development of the eHealth domain.
9.3.1. Narrative Reviews
The narrative review is the “traditional” way of reviewing the extant literature and is skewed towards a qualitative interpretation of prior knowledge ( Sylvester et al., 2013 ). Put simply, a narrative review attempts to summarize or synthesize what has been written on a particular topic but does not seek generalization or cumulative knowledge from what is reviewed ( Davies, 2000 ; Green et al., 2006 ). Instead, the review team often undertakes the task of accumulating and synthesizing the literature to demonstrate the value of a particular point of view ( Baumeister & Leary, 1997 ). As such, reviewers may selectively ignore or limit the attention paid to certain studies in order to make a point. In this rather unsystematic approach, the selection of information from primary articles is subjective, lacks explicit criteria for inclusion and can lead to biased interpretations or inferences ( Green et al., 2006 ). There are several narrative reviews in the particular eHealth domain, as in all fields, which follow such an unstructured approach ( Silva et al., 2015 ; Paul et al., 2015 ).
Despite these criticisms, this type of review can be very useful in gathering together a volume of literature in a specific subject area and synthesizing it. As mentioned above, its primary purpose is to provide the reader with a comprehensive background for understanding current knowledge and highlighting the significance of new research ( Cronin et al., 2008 ). Faculty like to use narrative reviews in the classroom because they are often more up to date than textbooks, provide a single source for students to reference, and expose students to peer-reviewed literature ( Green et al., 2006 ). For researchers, narrative reviews can inspire research ideas by identifying gaps or inconsistencies in a body of knowledge, thus helping researchers to determine research questions or formulate hypotheses. Importantly, narrative reviews can also be used as educational articles to bring practitioners up to date with certain topics or issues ( Green et al., 2006 ).
Recently, there have been several efforts to introduce more rigour in narrative reviews that will elucidate common pitfalls and bring changes into their publication standards. Information systems researchers, among others, have contributed to advancing knowledge on how to structure a “traditional” review. For instance, Levy and Ellis (2006) proposed a generic framework for conducting such reviews. Their model follows the systematic data processing approach comprised of three steps, namely: (a) literature search and screening; (b) data extraction and analysis; and (c) writing the literature review. They provide detailed and very helpful instructions on how to conduct each step of the review process. As another methodological contribution, vom Brocke et al. (2009) offered a series of guidelines for conducting literature reviews, with a particular focus on how to search and extract the relevant body of knowledge. Last, Bandara, Miskon, and Fielt (2011) proposed a structured, predefined and tool-supported method to identify primary studies within a feasible scope, extract relevant content from identified articles, synthesize and analyze the findings, and effectively write and present the results of the literature review. We highly recommend that prospective authors of narrative reviews consult these useful sources before embarking on their work.
Darlow and Wen (2015) provide a good example of a highly structured narrative review in the eHealth field. These authors synthesized published articles that describe the development process of mobile health (m-health) interventions for patients’ cancer care self-management. As in most narrative reviews, the scope of the research questions being investigated is broad: (a) how the development of these systems is carried out; (b) which methods are used to investigate these systems; and (c) what conclusions can be drawn as a result of the development of these systems. To provide clear answers to these questions, a literature search was conducted on six electronic databases and Google Scholar. The search was performed using several terms and free text words, combining them in an appropriate manner. Four inclusion and three exclusion criteria were utilized during the screening process. Both authors independently reviewed each of the identified articles to determine eligibility and extract study information. A flow diagram shows the number of studies identified, screened, and included or excluded at each stage of study selection. In terms of contributions, this review provides a series of practical recommendations for m-health intervention development.
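As an illustration of what "combining terms in an appropriate manner" typically means, the short sketch below groups synonyms with OR and links distinct concepts with AND. The concept groups and terms are invented for illustration only; they are not the Darlow and Wen search strategy, and real strategies are tailored to each database's field tags and controlled vocabulary.

```python
# Illustrative construction of a Boolean search string (hypothetical terms).
# Synonyms within a concept are OR-ed; the concepts themselves are AND-ed.

concepts = {
    "mhealth": ["mHealth", "mobile health", "smartphone app*"],
    "cancer": ["cancer", "neoplasm*", "oncolog*"],
    "self_management": ["self-management", "self care"],
}

def or_block(terms):
    # Quote each term and join with OR inside parentheses.
    return "(" + " OR ".join(f'"{t}"' for t in terms) + ")"

query = " AND ".join(or_block(terms) for terms in concepts.values())
print(query)
# ("mHealth" OR "mobile health" OR "smartphone app*") AND ("cancer" OR ...) AND ...
```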
9.3.2. Descriptive or Mapping Reviews
The primary goal of a descriptive review is to determine the extent to which a body of knowledge in a particular research topic reveals any interpretable pattern or trend with respect to pre-existing propositions, theories, methodologies or findings ( King & He, 2005 ; Paré et al., 2015 ). In contrast with narrative reviews, descriptive reviews follow a systematic and transparent procedure, including searching, screening and classifying studies ( Petersen, Vakkalanka, & Kuzniarz, 2015 ). Indeed, structured search methods are used to form a representative sample of a larger group of published works ( Paré et al., 2015 ). Further, authors of descriptive reviews extract from each study certain characteristics of interest, such as publication year, research methods, data collection techniques, and direction or strength of research outcomes (e.g., positive, negative, or non-significant) in the form of frequency analysis to produce quantitative results ( Sylvester et al., 2013 ). In essence, each study included in a descriptive review is treated as the unit of analysis and the published literature as a whole provides a database from which the authors attempt to identify any interpretable trends or draw overall conclusions about the merits of existing conceptualizations, propositions, methods or findings ( Paré et al., 2015 ). In doing so, a descriptive review may claim that its findings represent the state of the art in a particular domain ( King & He, 2005 ).
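A minimal sketch of the kind of frequency analysis a descriptive review reports is shown below, using invented coded study characteristics. In practice, each included study would first be coded into a structured extraction sheet; the tabulation itself is then straightforward.

```python
# Minimal frequency analysis over coded study characteristics (hypothetical data),
# the kind of tabulation a descriptive/mapping review reports.

from collections import Counter

coded_studies = [
    {"year": 2018, "method": "survey",     "outcome": "positive"},
    {"year": 2019, "method": "case study", "outcome": "non-significant"},
    {"year": 2019, "method": "survey",     "outcome": "positive"},
    {"year": 2021, "method": "experiment", "outcome": "negative"},
]

for attribute in ("year", "method", "outcome"):
    counts = Counter(study[attribute] for study in coded_studies)
    print(attribute, dict(counts))
```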
In the fields of health sciences and medical informatics, reviews that focus on examining the range, nature and evolution of a topic area are described by Anderson, Allen, Peckham, and Goodwin (2008) as mapping reviews . Like descriptive reviews, the research questions are generic and usually relate to publication patterns and trends. There is no preconceived plan to systematically review all of the literature although this can be done. Instead, researchers often present studies that are representative of most works published in a particular area and they consider a specific time frame to be mapped.
An example of this approach in the eHealth domain is offered by DeShazo, Lavallie, and Wolf (2009). The purpose of this descriptive or mapping review was to characterize publication trends in the medical informatics literature over a 20-year period (1987 to 2006). To achieve this ambitious objective, the authors performed a bibliometric analysis of medical informatics citations indexed in medline using publication trends, journal frequencies, impact factors, Medical Subject Headings (MeSH) term frequencies, and characteristics of citations. Findings revealed that there were over 77,000 medical informatics articles published during the covered period in numerous journals and that the average annual growth rate was 12%. The MeSH term analysis also suggested a strong interdisciplinary trend. Finally, average impact scores increased over time with two notable growth periods. Overall, patterns in research outputs that seem to characterize the historic trends and current components of the field of medical informatics suggest it may be a maturing discipline (DeShazo et al., 2009).
9.3.3. Scoping Reviews
Scoping reviews attempt to provide an initial indication of the potential size and nature of the extant literature on an emergent topic (Arksey & O’Malley, 2005; Daudt, van Mossel, & Scott, 2013 ; Levac, Colquhoun, & O’Brien, 2010). A scoping review may be conducted to examine the extent, range and nature of research activities in a particular area, determine the value of undertaking a full systematic review (discussed next), or identify research gaps in the extant literature ( Paré et al., 2015 ). In line with their main objective, scoping reviews usually conclude with the presentation of a detailed research agenda for future works along with potential implications for both practice and research.
Unlike narrative and descriptive reviews, the whole point of scoping the field is to be as comprehensive as possible, including grey literature (Arksey & O’Malley, 2005). Inclusion and exclusion criteria must be established to help researchers eliminate studies that are not aligned with the research questions. It is also recommended that at least two independent coders review abstracts yielded from the search strategy and then the full articles for study selection ( Daudt et al., 2013 ). The synthesized evidence from content or thematic analysis is relatively easy to present in tabular form (Arksey & O’Malley, 2005; Thomas & Harden, 2008 ).
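Because scoping reviews rely on at least two independent coders for study selection, review teams often want a quick summary of how well the coders agreed before disagreements were resolved. Cohen's kappa is one common statistic for this; it is an assumption introduced here for illustration, not a requirement stated in the chapter. A minimal sketch with made-up decisions:

```python
# Cohen's kappa for two reviewers' include/exclude decisions (illustrative).
# Kappa corrects raw agreement for the agreement expected by chance.

def cohens_kappa(decisions_a, decisions_b):
    assert len(decisions_a) == len(decisions_b)
    n = len(decisions_a)
    labels = set(decisions_a) | set(decisions_b)
    observed = sum(a == b for a, b in zip(decisions_a, decisions_b)) / n
    expected = sum(
        (decisions_a.count(lab) / n) * (decisions_b.count(lab) / n)
        for lab in labels
    )
    return (observed - expected) / (1 - expected)

a = ["include", "exclude", "exclude", "include", "exclude", "exclude"]
b = ["include", "exclude", "include", "include", "exclude", "exclude"]
print(round(cohens_kappa(a, b), 2))  # 0.67 for this toy example
```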
One of the most highly cited scoping reviews in the eHealth domain was published by Archer, Fevrier-Thomas, Lokker, McKibbon, and Straus (2011). These authors reviewed the existing literature on personal health record (PHR) systems including design, functionality, implementation, applications, outcomes, and benefits. Seven databases were searched from 1985 to March 2010. Several search terms relating to PHRs were used during this process. Two authors independently screened titles and abstracts to determine inclusion status. A second screen of full-text articles, again by two independent members of the research team, ensured that the studies described PHRs. All in all, 130 articles met the criteria and their data were extracted manually into a database. The authors concluded that although there is a large amount of survey, observational, cohort/panel, and anecdotal evidence of PHR benefits and satisfaction for patients, more research is needed to evaluate the results of PHR implementations. Their in-depth analysis of the literature signalled that there is little solid evidence from randomized controlled trials or other studies through the use of PHRs. Hence, they suggested that more research is needed that addresses the current lack of understanding of optimal functionality and usability of these systems, and how they can play a beneficial role in supporting patient self-management ( Archer et al., 2011 ).
9.3.4. Forms of Aggregative Reviews
Healthcare providers, practitioners, and policy-makers are nowadays overwhelmed with large volumes of information, including research-based evidence from numerous clinical trials and evaluation studies, assessing the effectiveness of health information technologies and interventions ( Ammenwerth & de Keizer, 2004 ; Deshazo et al., 2009 ). It is unrealistic to expect that all these disparate actors will have the time, skills, and necessary resources to identify the available evidence in the area of their expertise and consider it when making decisions. Systematic reviews that involve the rigorous application of scientific strategies aimed at limiting subjectivity and bias (i.e., systematic and random errors) can respond to this challenge.
Systematic reviews attempt to aggregate, appraise, and synthesize in a single source all empirical evidence that meet a set of previously specified eligibility criteria in order to answer a clearly formulated and often narrow research question on a particular topic of interest to support evidence-based practice ( Liberati et al., 2009 ). They adhere closely to explicit scientific principles ( Liberati et al., 2009 ) and rigorous methodological guidelines (Higgins & Green, 2008) aimed at reducing random and systematic errors that can lead to deviations from the truth in results or inferences. The use of explicit methods allows systematic reviews to aggregate a large body of research evidence, assess whether effects or relationships are in the same direction and of the same general magnitude, explain possible inconsistencies between study results, and determine the strength of the overall evidence for every outcome of interest based on the quality of included studies and the general consistency among them ( Cook, Mulrow, & Haynes, 1997 ). The main procedures of a systematic review involve:
- Formulating a review question and developing a search strategy based on explicit inclusion criteria for the identification of eligible studies (usually described in the context of a detailed review protocol).
- Searching for eligible studies using multiple databases and information sources, including grey literature sources, without any language restrictions.
- Selecting studies, extracting data, and assessing risk of bias in a duplicate manner using two independent reviewers to avoid random or systematic errors in the process (a minimal data-extraction sketch follows this list).
- Analyzing data using quantitative or qualitative methods.
- Presenting results in summary of findings tables.
- Interpreting results and drawing conclusions.
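As a rough illustration of the extraction and summary-of-findings steps listed above, the sketch below stores extracted fields in a small data structure and prints them as a plain table. The field names, study labels, and effect estimates are hypothetical; real extraction forms follow the fields defined in the review protocol.

```python
# Sketch of a structured data-extraction form and a plain-text summary table
# (field names and values are illustrative only).

from dataclasses import dataclass, asdict

@dataclass
class ExtractedStudy:
    study_id: str
    design: str
    n_participants: int
    outcome: str
    effect_estimate: str   # e.g., "RR 1.14 (95% CI 1.03-1.26)"
    risk_of_bias: str      # e.g., "low", "some concerns", "high"

studies = [
    ExtractedStudy("Smith 2019", "RCT", 240, "appointment attendance",
                   "RR 1.14 (95% CI 1.03-1.26)", "low"),
    ExtractedStudy("Lee 2021", "RCT", 310, "appointment attendance",
                   "RR 1.08 (95% CI 0.98-1.19)", "some concerns"),
]

# Render a minimal summary-of-findings table.
header = list(asdict(studies[0]).keys())
print(" | ".join(header))
for s in studies:
    print(" | ".join(str(v) for v in asdict(s).values()))
```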
Many systematic reviews, but not all, use statistical methods to combine the results of independent studies into a single quantitative estimate or summary effect size. Known as meta-analyses , these reviews use specific data extraction and statistical techniques (e.g., network, frequentist, or Bayesian meta-analyses) to calculate from each study by outcome of interest an effect size along with a confidence interval that reflects the degree of uncertainty behind the point estimate of effect ( Borenstein, Hedges, Higgins, & Rothstein, 2009 ; Deeks, Higgins, & Altman, 2008 ). Subsequently, they use fixed or random-effects analysis models to combine the results of the included studies, assess statistical heterogeneity, and calculate a weighted average of the effect estimates from the different studies, taking into account their sample sizes. The summary effect size is a value that reflects the average magnitude of the intervention effect for a particular outcome of interest or, more generally, the strength of a relationship between two variables across all studies included in the systematic review. By statistically combining data from multiple studies, meta-analyses can create more precise and reliable estimates of intervention effects than those derived from individual studies alone, when these are examined independently as discrete sources of information.
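The pooling logic described above can be illustrated numerically. The sketch below computes a fixed-effect inverse-variance average and a DerSimonian-Laird random-effects estimate from hypothetical effect sizes and variances; it is a toy example under those assumptions, not a substitute for dedicated meta-analysis software.

```python
# Minimal inverse-variance pooling of hypothetical study effect sizes, showing
# the fixed-effect and DerSimonian-Laird random-effects logic described above.

import math

effects   = [0.30, 0.10, 0.45, 0.25]   # per-study effect estimates (hypothetical)
variances = [0.02, 0.03, 0.05, 0.01]   # their sampling variances (hypothetical)

# Fixed-effect model: weight each study by 1/variance.
w = [1 / v for v in variances]
fixed = sum(wi * ei for wi, ei in zip(w, effects)) / sum(w)

# DerSimonian-Laird estimate of between-study variance (tau^2).
q = sum(wi * (ei - fixed) ** 2 for wi, ei in zip(w, effects))
df = len(effects) - 1
c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (q - df) / c)

# Random-effects model: add tau^2 to each study's variance before weighting.
w_re = [1 / (v + tau2) for v in variances]
random_eff = sum(wi * ei for wi, ei in zip(w_re, effects)) / sum(w_re)
se_re = math.sqrt(1 / sum(w_re))

print(f"fixed-effect estimate  : {fixed:.3f}")
print(f"random-effects estimate: {random_eff:.3f} "
      f"(95% CI {random_eff - 1.96 * se_re:.3f} to {random_eff + 1.96 * se_re:.3f})")
```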
The review by Gurol-Urganci, de Jongh, Vodopivec-Jamsek, Atun, and Car (2013) on the effects of mobile phone messaging reminders for attendance at healthcare appointments is an illustrative example of a high-quality systematic review with meta-analysis. Missed appointments are a major cause of inefficiency in healthcare delivery with substantial monetary costs to health systems. These authors sought to assess whether mobile phone-based appointment reminders delivered through Short Message Service (SMS) or Multimedia Messaging Service (MMS) are effective in improving rates of patient attendance and reducing overall costs. To this end, they conducted a comprehensive search on multiple databases using highly sensitive search strategies without language or publication-type restrictions to identify all RCTs that are eligible for inclusion. In order to minimize the risk of omitting eligible studies not captured by the original search, they supplemented all electronic searches with manual screening of trial registers and references contained in the included studies. Study selection, data extraction, and risk of bias assessments were performed independently by two coders using standardized methods to ensure consistency and to eliminate potential errors. Findings from eight RCTs involving 6,615 participants were pooled into meta-analyses to calculate the magnitude of effects that mobile text message reminders have on the rate of attendance at healthcare appointments compared to no reminders and phone call reminders.
Meta-analyses are regarded as powerful tools for deriving meaningful conclusions. However, there are situations in which it is neither reasonable nor appropriate to pool studies together using meta-analytic methods simply because there is extensive clinical heterogeneity between the included studies or variation in measurement tools, comparisons, or outcomes of interest. In these cases, systematic reviews can use qualitative synthesis methods such as vote counting, content analysis, classification schemes and tabulations, as an alternative approach to narratively synthesize the results of the independent studies included in the review. This form of review is known as qualitative systematic review.
A rigorous example of one such review in the eHealth domain is presented by Mickan, Atherton, Roberts, Heneghan, and Tilson (2014) on the use of handheld computers by healthcare professionals and their impact on access to information and clinical decision-making. In line with the methodological guidelines for systematic reviews, these authors: (a) developed and registered with PROSPERO (www.crd.york.ac.uk/prospero/) an a priori review protocol; (b) conducted comprehensive searches for eligible studies using multiple databases and other supplementary strategies (e.g., forward searches); and (c) subsequently carried out study selection, data extraction, and risk of bias assessments in a duplicate manner to eliminate potential errors in the review process. Heterogeneity between the included studies in terms of reported outcomes and measures precluded the use of meta-analytic methods. To this end, the authors resorted to using narrative analysis and synthesis to describe the effectiveness of handheld computers on accessing information for clinical knowledge, adherence to safety and clinical quality guidelines, and diagnostic decision-making.
In recent years, the number of systematic reviews in the field of health informatics has increased considerably. Systematic reviews with discordant findings can cause great confusion and make it difficult for decision-makers to interpret the review-level evidence ( Moher, 2013 ). Therefore, there is a growing need for appraisal and synthesis of prior systematic reviews to ensure that decision-making is constantly informed by the best available accumulated evidence. Umbrella reviews , also known as overviews of systematic reviews, are tertiary types of evidence synthesis that aim to accomplish this; that is, they aim to compare and contrast findings from multiple systematic reviews and meta-analyses ( Becker & Oxman, 2008 ). Umbrella reviews generally adhere to the same principles and rigorous methodological guidelines used in systematic reviews. However, the unit of analysis in umbrella reviews is the systematic review rather than the primary study ( Becker & Oxman, 2008 ). Unlike systematic reviews that have a narrow focus of inquiry, umbrella reviews focus on broader research topics for which there are several potential interventions ( Smith, Devane, Begley, & Clarke, 2011 ). A recent umbrella review on the effects of home telemonitoring interventions for patients with heart failure critically appraised, compared, and synthesized evidence from 15 systematic reviews to investigate which types of home telemonitoring technologies and forms of interventions are more effective in reducing mortality and hospital admissions ( Kitsiou, Paré, & Jaana, 2015 ).
9.3.5. Realist Reviews
Realist reviews are theory-driven interpretative reviews developed to inform, enhance, or supplement conventional systematic reviews by making sense of heterogeneous evidence about complex interventions applied in diverse contexts in a way that informs policy decision-making ( Greenhalgh, Wong, Westhorp, & Pawson, 2011 ). They originated from criticisms of positivist systematic reviews which centre on their “simplistic” underlying assumptions ( Oates, 2011 ). As explained above, systematic reviews seek to identify causation. Such logic is appropriate for fields like medicine and education where findings of randomized controlled trials can be aggregated to see whether a new treatment or intervention does improve outcomes. However, many argue that it is not possible to establish such direct causal links between interventions and outcomes in fields such as social policy, management, and information systems where for any intervention there is unlikely to be a regular or consistent outcome ( Oates, 2011 ; Pawson, 2006 ; Rousseau, Manning, & Denyer, 2008 ).
To circumvent these limitations, Pawson, Greenhalgh, Harvey, and Walshe (2005) have proposed a new approach for synthesizing knowledge that seeks to unpack the mechanism of how “complex interventions” work in particular contexts. The basic research question — what works? — which is usually associated with systematic reviews changes to: what is it about this intervention that works, for whom, in what circumstances, in what respects and why? Realist reviews have no particular preference for either quantitative or qualitative evidence. As a theory-building approach, a realist review usually starts by articulating likely underlying mechanisms and then scrutinizes available evidence to find out whether and where these mechanisms are applicable ( Shepperd et al., 2009 ). Primary studies found in the extant literature are viewed as case studies which can test and modify the initial theories ( Rousseau et al., 2008 ).
The main objective pursued in the realist review conducted by Otte-Trojel, de Bont, Rundall, and van de Klundert (2014) was to examine how patient portals contribute to health service delivery and patient outcomes. The specific goals were to investigate how outcomes are produced and, most importantly, how variations in outcomes can be explained. The research team started with an exploratory review of background documents and research studies to identify ways in which patient portals may contribute to health service delivery and patient outcomes. The authors identified six main ways which represent “educated guesses” to be tested against the data in the evaluation studies. These studies were identified through a formal and systematic search in four databases between 2003 and 2013. Two members of the research team selected the articles using a pre-established list of inclusion and exclusion criteria and following a two-step procedure. The authors then extracted data from the selected articles and created several tables, one for each outcome category. They organized information to bring forward those mechanisms where patient portals contribute to outcomes and the variation in outcomes across different contexts.
9.3.6. Critical Reviews
Lastly, critical reviews aim to provide a critical evaluation and interpretive analysis of existing literature on a particular topic of interest to reveal strengths, weaknesses, contradictions, controversies, inconsistencies, and/or other important issues with respect to theories, hypotheses, research methods or results ( Baumeister & Leary, 1997 ; Kirkevold, 1997 ). Unlike other review types, critical reviews attempt to take a reflective account of the research that has been done in a particular area of interest, and assess its credibility by using appraisal instruments or critical interpretive methods. In this way, critical reviews attempt to constructively inform other scholars about the weaknesses of prior research and strengthen knowledge development by giving focus and direction to studies for further improvement ( Kirkevold, 1997 ).
Kitsiou, Paré, and Jaana (2013) provide an example of a critical review that assessed the methodological quality of prior systematic reviews of home telemonitoring studies for chronic patients. The authors conducted a comprehensive search on multiple databases to identify eligible reviews and subsequently used a validated instrument to conduct an in-depth quality appraisal. Results indicate that the majority of systematic reviews in this particular area suffer from important methodological flaws and biases that impair their internal validity and limit their usefulness for clinical and decision-making purposes. To this end, they provide a number of recommendations to strengthen knowledge development towards improving the design and execution of future reviews on home telemonitoring.
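In practice, the kind of structured appraisal described above amounts to rating each included review against a fixed set of criteria and tallying the results. The sketch below is purely illustrative: the item labels are an abbreviated paraphrase of AMSTAR-style criteria rather than the official instrument wording, and the review names and ratings are invented for demonstration.

```python
# Illustrative sketch: tallying quality-appraisal ratings across systematic reviews.
# The items are an abbreviated paraphrase of AMSTAR-style criteria, not the official
# instrument, and the reviews and ratings are hypothetical.

APPRAISAL_ITEMS = [
    "a_priori_design",
    "duplicate_selection_and_extraction",
    "comprehensive_search",
    "quality_assessment_of_included_studies",
    "appropriate_synthesis_methods",
]

def appraisal_score(ratings: dict[str, str]) -> int:
    """Count how many items were rated 'yes' for one review."""
    return sum(1 for item in APPRAISAL_ITEMS if ratings.get(item) == "yes")

reviews = {
    "Review A (hypothetical)": {"a_priori_design": "yes", "comprehensive_search": "yes"},
    "Review B (hypothetical)": {item: "yes" for item in APPRAISAL_ITEMS},
}

for name, ratings in reviews.items():
    print(f"{name}: {appraisal_score(ratings)}/{len(APPRAISAL_ITEMS)} criteria met")
```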
9.4. Summary
Table 9.1 outlines the main types of literature reviews that were described in the previous sub-sections and summarizes the main characteristics that distinguish one review type from another. It also includes key references to methodological guidelines and useful sources that can be used by eHealth scholars and researchers for planning and developing reviews.
Table 9.1. Typology of Literature Reviews (adapted from Paré et al., 2015).
As shown in Table 9.1 , each review type addresses different kinds of research questions or objectives, which subsequently define and dictate the methods and approaches that need to be used to achieve the overarching goal(s) of the review. For example, in the case of narrative reviews, there is greater flexibility in searching and synthesizing articles ( Green et al., 2006 ). Researchers are often relatively free to use a diversity of approaches to search, identify, and select relevant scientific articles, describe their operational characteristics, present how the individual studies fit together, and formulate conclusions. On the other hand, systematic reviews are characterized by their high level of systematicity, rigour, and use of explicit methods, based on an “a priori” review plan that aims to minimize bias in the analysis and synthesis process (Higgins & Green, 2008). Some reviews are exploratory in nature (e.g., scoping/mapping reviews), whereas others may be conducted to discover patterns (e.g., descriptive reviews) or involve a synthesis approach that may include the critical analysis of prior research ( Paré et al., 2015 ). Hence, in order to select the most appropriate type of review, it is critical to know before embarking on a review project, why the research synthesis is conducted and what type of methods are best aligned with the pursued goals.
9.5. Concluding Remarks
In light of the increased use of evidence-based practice and research generating stronger evidence (Grady et al., 2011; Lyden et al., 2013), review articles have become essential tools for summarizing, synthesizing, integrating or critically appraising prior knowledge in the eHealth field. As mentioned earlier, when rigorously conducted, review articles represent powerful information sources for eHealth scholars and practitioners looking for state-of-the-art evidence. The typology of literature reviews we used herein will allow eHealth researchers, graduate students and practitioners to gain a better understanding of the similarities and differences between review types.
We must stress that this classification scheme does not privilege any specific type of review as being of higher quality than another ( Paré et al., 2015 ). As explained above, each type of review has its own strengths and limitations. Having said that, we realize that the methodological rigour of any review — be it qualitative, quantitative or mixed — is a critical aspect that should be considered seriously by prospective authors. In the present context, the notion of rigour refers to the reliability and validity of the review process described in section 9.2. For one thing, reliability is related to the reproducibility of the review process and steps, which is facilitated by a comprehensive documentation of the literature search process, extraction, coding and analysis performed in the review. Whether the search is comprehensive or not, whether it involves a methodical approach for data extraction and synthesis or not, it is important that the review documents in an explicit and transparent manner the steps and approach that were used in the process of its development. Next, validity characterizes the degree to which the review process was conducted appropriately. It goes beyond documentation and reflects decisions related to the selection of the sources, the search terms used, the period of time covered, the articles selected in the search, and the application of backward and forward searches ( vom Brocke et al., 2009 ). In short, the rigour of any review article is reflected by the explicitness of its methods (i.e., transparency) and the soundness of the approach used. We refer those interested in the concepts of rigour and quality to the work of Templier and Paré (2015) which offers a detailed set of methodological guidelines for conducting and evaluating various types of review articles.
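One practical way to support the reproducibility discussed above is to keep the search documentation as structured data that can be archived alongside the review. The following sketch is a minimal illustration only; the field names and example values (databases, query strings, dates, record counts) are hypothetical rather than a prescribed reporting standard.

```python
# Minimal sketch of a structured search log to support reproducibility.
# Field names and example values are hypothetical, not a prescribed standard.
from dataclasses import dataclass, asdict
from datetime import date
import json

@dataclass
class SearchRecord:
    database: str
    search_string: str
    date_searched: date
    period_covered: str
    records_retrieved: int

log = [
    SearchRecord("MEDLINE", '"telemonitoring" AND "heart failure"', date(2017, 1, 15), "2003-2013", 412),
    SearchRecord("CINAHL", '"telemonitoring" AND "heart failure"', date(2017, 1, 15), "2003-2013", 187),
]

# Serialise the log so the exact search strategy can be archived and shared.
entries = [asdict(r) | {"date_searched": r.date_searched.isoformat()} for r in log]
print(json.dumps(entries, indent=2))
```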
To conclude, our main objective in this chapter was to demystify the various types of literature reviews that are central to the continuous development of the eHealth field. It is our hope that our descriptive account will serve as a valuable source for those conducting, evaluating or using reviews in this important and growing domain.
- Ammenwerth E., de Keizer N. An inventory of evaluation studies of information technology in health care. Trends in evaluation research, 1982-2002. International Journal of Medical Informatics. 2004; 44 (1):44–56. [ PubMed : 15778794 ]
- Anderson S., Allen P., Peckham S., Goodwin N. Asking the right questions: scoping studies in the commissioning of research on the organisation and delivery of health services. Health Research Policy and Systems. 2008; 6 (7):1–12. [ PMC free article : PMC2500008 ] [ PubMed : 18613961 ] [ CrossRef ]
- Archer N., Fevrier-Thomas U., Lokker C., McKibbon K. A., Straus S.E. Personal health records: a scoping review. Journal of American Medical Informatics Association. 2011; 18 (4):515–522. [ PMC free article : PMC3128401 ] [ PubMed : 21672914 ]
- Arksey H., O’Malley L. Scoping studies: towards a methodological framework. International Journal of Social Research Methodology. 2005; 8 (1):19–32.
- A systematic, tool-supported method for conducting literature reviews in information systems. Paper presented at the Proceedings of the 19th European Conference on Information Systems (ECIS 2011); June 9 to 11; Helsinki, Finland. 2011.
- Baumeister R. F., Leary M.R. Writing narrative literature reviews. Review of General Psychology. 1997; 1 (3):311–320.
- Becker L. A., Oxman A.D. In: Cochrane handbook for systematic reviews of interventions. Higgins J. P. T., Green S., editors. Hoboken, NJ: John Wiley & Sons, Ltd; 2008. Overviews of reviews; pp. 607–631.
- Borenstein M., Hedges L., Higgins J., Rothstein H. Introduction to meta-analysis. Hoboken, NJ: John Wiley & Sons Inc; 2009.
- Cook D. J., Mulrow C. D., Haynes B. Systematic reviews: Synthesis of best evidence for clinical decisions. Annals of Internal Medicine. 1997; 126 (5):376–380. [ PubMed : 9054282 ]
- Cooper H., Hedges L.V. In: The handbook of research synthesis and meta-analysis. 2nd ed. Cooper H., Hedges L. V., Valentine J. C., editors. New York: Russell Sage Foundation; 2009. Research synthesis as a scientific process; pp. 3–17.
- Cooper H. M. Organizing knowledge syntheses: A taxonomy of literature reviews. Knowledge in Society. 1988; 1 (1):104–126.
- Cronin P., Ryan F., Coughlan M. Undertaking a literature review: a step-by-step approach. British Journal of Nursing. 2008; 17 (1):38–43. [ PubMed : 18399395 ]
- Darlow S., Wen K.Y. Development testing of mobile health interventions for cancer patient self-management: A review. Health Informatics Journal. 2015 (online before print). [ PubMed : 25916831 ] [ CrossRef ]
- Daudt H. M., van Mossel C., Scott S.J. Enhancing the scoping study methodology: a large, inter-professional team’s experience with Arksey and O’Malley’s framework. BMC Medical Research Methodology. 2013; 13 :48. [ PMC free article : PMC3614526 ] [ PubMed : 23522333 ] [ CrossRef ]
- Davies P. The relevance of systematic reviews to educational policy and practice. Oxford Review of Education. 2000; 26 (3-4):365–378.
- Deeks J. J., Higgins J. P. T., Altman D.G. In: Cochrane handbook for systematic reviews of interventions. Higgins J. P. T., Green S., editors. Hoboken, NJ: John Wiley & Sons, Ltd; 2008. Analysing data and undertaking meta-analyses; pp. 243–296.
- Deshazo J. P., Lavallie D. L., Wolf F.M. Publication trends in the medical informatics literature: 20 years of “Medical Informatics” in MeSH. BMC Medical Informatics and Decision Making. 2009; 9 :7. [ PMC free article : PMC2652453 ] [ PubMed : 19159472 ] [ CrossRef ]
- Dixon-Woods M., Agarwal S., Jones D., Young B., Sutton A. Synthesising qualitative and quantitative evidence: a review of possible methods. Journal of Health Services Research and Policy. 2005; 10 (1):45–53. [ PubMed : 15667704 ]
- Finfgeld-Connett D., Johnson E.D. Literature search strategies for conducting knowledge-building and theory-generating qualitative systematic reviews. Journal of Advanced Nursing. 2013; 69 (1):194–204. [ PMC free article : PMC3424349 ] [ PubMed : 22591030 ]
- Grady B., Myers K. M., Nelson E. L., Belz N., Bennett L., Carnahan L. … Guidelines Working Group. Evidence-based practice for telemental health. Telemedicine Journal and E Health. 2011; 17 (2):131–148. [ PubMed : 21385026 ]
- Green B. N., Johnson C. D., Adams A. Writing narrative literature reviews for peer-reviewed journals: secrets of the trade. Journal of Chiropractic Medicine. 2006; 5 (3):101–117. [ PMC free article : PMC2647067 ] [ PubMed : 19674681 ]
- Greenhalgh T., Wong G., Westhorp G., Pawson R. Protocol–realist and meta-narrative evidence synthesis: evolving standards ( rameses ). bmc Medical Research Methodology. 2011; 11 :115. [ PMC free article : PMC3173389 ] [ PubMed : 21843376 ]
- Gurol-Urganci I., de Jongh T., Vodopivec-Jamsek V., Atun R., Car J. Mobile phone messaging reminders for attendance at healthcare appointments. Cochrane Database of Systematic Reviews. 2013; 12 : CD007458. [ PMC free article : PMC6485985 ] [ PubMed : 24310741 ] [ CrossRef ]
- Hart C. Doing a literature review: Releasing the social science research imagination. London: SAGE Publications; 1998.
- Higgins J. P. T., Green S., editors. Cochrane handbook for systematic reviews of interventions: Cochrane book series. Hoboken, NJ: Wiley-Blackwell; 2008.
- Jesson J., Matheson L., Lacey F.M. Doing your literature review: traditional and systematic techniques. Los Angeles & London: SAGE Publications; 2011.
- King W. R., He J. Understanding the role and methods of meta-analysis in IS research. Communications of the Association for Information Systems. 2005; 16 :1.
- Kirkevold M. Integrative nursing research — an important strategy to further the development of nursing science and nursing practice. Journal of Advanced Nursing. 1997; 25 (5):977–984. [ PubMed : 9147203 ]
- Kitchenham B., Charters S. EBSE Technical Report Version 2.3. Keele & Durham, UK: Keele University & University of Durham; 2007. Guidelines for performing systematic literature reviews in software engineering.
- Kitsiou S., Paré G., Jaana M. Systematic reviews and meta-analyses of home telemonitoring interventions for patients with chronic diseases: a critical assessment of their methodological quality. Journal of Medical Internet Research. 2013; 15 (7):e150. [ PMC free article : PMC3785977 ] [ PubMed : 23880072 ]
- Kitsiou S., Paré G., Jaana M. Effects of home telemonitoring interventions on patients with chronic heart failure: an overview of systematic reviews. Journal of Medical Internet Research. 2015; 17 (3):e63. [ PMC free article : PMC4376138 ] [ PubMed : 25768664 ]
- Levac D., Colquhoun H., O’Brien K. K. Scoping studies: advancing the methodology. Implementation Science. 2010; 5 (1):69. [ PMC free article : PMC2954944 ] [ PubMed : 20854677 ]
- Levy Y., Ellis T.J. A systems approach to conduct an effective literature review in support of information systems research. Informing Science. 2006; 9 :181–211.
- Liberati A., Altman D. G., Tetzlaff J., Mulrow C., Gøtzsche P. C., Ioannidis J. P. A. et al. Moher D. The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate health care interventions: Explanation and elaboration. Annals of Internal Medicine. 2009; 151 (4):W-65. [ PubMed : 19622512 ]
- Lyden J. R., Zickmund S. L., Bhargava T. D., Bryce C. L., Conroy M. B., Fischer G. S. et al. McTigue K. M. Implementing health information technology in a patient-centered manner: Patient experiences with an online evidence-based lifestyle intervention. Journal for Healthcare Quality. 2013; 35 (5):47–57. [ PubMed : 24004039 ]
- Mickan S., Atherton H., Roberts N. W., Heneghan C., Tilson J.K. Use of handheld computers in clinical practice: a systematic review. BMC Medical Informatics and Decision Making. 2014; 14 :56. [ PMC free article : PMC4099138 ] [ PubMed : 24998515 ]
- Moher D. The problem of duplicate systematic reviews. British Medical Journal. 2013; 347 (5040) [ PubMed : 23945367 ] [ CrossRef ]
- Montori V. M., Wilczynski N. L., Morgan D., Haynes R. B., Hedges T. Systematic reviews: a cross-sectional study of location and citation counts. BMC Medicine. 2003; 1 :2. [ PMC free article : PMC281591 ] [ PubMed : 14633274 ]
- Mulrow C. D. The medical review article: state of the science. Annals of Internal Medicine. 1987; 106 (3):485–488. [ PubMed : 3813259 ] [ CrossRef ]
- Oates B. J. Evidence-based information systems: A decade later. Proceedings of the European Conference on Information Systems (ECIS 2011); 2011. Retrieved from http://aisel.aisnet.org/cgi/viewcontent.cgi?article=1221&context=ecis2011
- Okoli C., Schabram K. A guide to conducting a systematic literature review of information systems research. SSRN Electronic Journal. 2010.
- Otte-Trojel T., de Bont A., Rundall T. G., van de Klundert J. How outcomes are achieved through patient portals: a realist review. Journal of American Medical Informatics Association. 2014; 21 (4):751–757. [ PMC free article : PMC4078283 ] [ PubMed : 24503882 ]
- Paré G., Trudel M.-C., Jaana M., Kitsiou S. Synthesizing information systems knowledge: A typology of literature reviews. Information & Management. 2015; 52 (2):183–199.
- Patsopoulos N. A., Analatos A. A., Ioannidis J.P. A. Relative citation impact of various study designs in the health sciences. Journal of the American Medical Association. 2005; 293 (19):2362–2366. [ PubMed : 15900006 ]
- Paul M. M., Greene C. M., Newton-Dame R., Thorpe L. E., Perlman S. E., McVeigh K. H., Gourevitch M.N. The state of population health surveillance using electronic health records: A narrative review. Population Health Management. 2015; 18 (3):209–216. [ PubMed : 25608033 ]
- Pawson R. Evidence-based policy: a realist perspective. London: SAGE Publications; 2006.
- Pawson R., Greenhalgh T., Harvey G., Walshe K. Realist review—a new method of systematic review designed for complex policy interventions. Journal of Health Services Research & Policy. 2005; 10 (Suppl 1):21–34. [ PubMed : 16053581 ]
- Petersen K., Vakkalanka S., Kuzniarz L. Guidelines for conducting systematic mapping studies in software engineering: An update. Information and Software Technology. 2015; 64 :1–18.
- Petticrew M., Roberts H. Systematic reviews in the social sciences: A practical guide. Malden, MA: Blackwell Publishing Co; 2006.
- Rousseau D. M., Manning J., Denyer D. Evidence in management and organizational science: Assembling the field’s full weight of scientific knowledge through syntheses. The Academy of Management Annals. 2008; 2 (1):475–515.
- Rowe F. What literature review is not: diversity, boundaries and recommendations. European Journal of Information Systems. 2014; 23 (3):241–255.
- Shea B. J., Hamel C., Wells G. A., Bouter L. M., Kristjansson E., Grimshaw J. et al. Boers M. AMSTAR is a reliable and valid measurement tool to assess the methodological quality of systematic reviews. Journal of Clinical Epidemiology. 2009; 62 (10):1013–1020. [ PubMed : 19230606 ]
- Shepperd S., Lewin S., Straus S., Clarke M., Eccles M. P., Fitzpatrick R. et al. Sheikh A. Can we systematically review studies that evaluate complex interventions? PLoS Medicine. 2009; 6 (8):e1000086. [ PMC free article : PMC2717209 ] [ PubMed : 19668360 ]
- Silva B. M., Rodrigues J. J., de la Torre Díez I., López-Coronado M., Saleem K. Mobile-health: A review of current state in 2015. Journal of Biomedical Informatics. 2015; 56 :265–272. [ PubMed : 26071682 ]
- Smith V., Devane D., Begley C., Clarke M. Methodology in conducting a systematic review of systematic reviews of healthcare interventions. BMC Medical Research Methodology. 2011; 11 (1):15. [ PMC free article : PMC3039637 ] [ PubMed : 21291558 ]
- Sylvester A., Tate M., Johnstone D. Beyond synthesis: re-presenting heterogeneous research literature. Behaviour & Information Technology. 2013; 32 (12):1199–1215.
- Templier M., Paré G. A framework for guiding and evaluating literature reviews. Communications of the Association for Information Systems. 2015; 37 (6):112–137.
- Thomas J., Harden A. Methods for the thematic synthesis of qualitative research in systematic reviews. BMC Medical Research Methodology. 2008; 8 (1):45. [ PMC free article : PMC2478656 ] [ PubMed : 18616818 ]
- vom Brocke J., Simons A., Niehaves B., Riemer K., Plattfaut R., Cleven A. Reconstructing the giant: on the importance of rigour in documenting the literature search process. Paper presented at the Proceedings of the 17th European Conference on Information Systems (ECIS 2009); Verona, Italy. 2009.
- Webster J., Watson R.T. Analyzing the past to prepare for the future: Writing a literature review. Management Information Systems Quarterly. 2002; 26 (2):11.
- Whitlock E. P., Lin J. S., Chou R., Shekelle P., Robinson K.A. Using existing systematic reviews in complex systematic reviews. Annals of Internal Medicine. 2008; 148 (10):776–782. [ PubMed : 18490690 ]
This publication is licensed under a Creative Commons License, Attribution-Noncommercial 4.0 International License (CC BY-NC 4.0): see https://creativecommons.org/licenses/by-nc/4.0/
Cite this page: Paré G, Kitsiou S. Chapter 9 Methods for Literature Reviews. In: Lau F, Kuziemsky C, editors. Handbook of eHealth Evaluation: An Evidence-based Approach [Internet]. Victoria (BC): University of Victoria; 2017 Feb 27.
Research methods overview
What are research methods?
Research methodology refers to the specific strategies, processes, or techniques used to collect, generate, and analyse information.
The methodology section of a research paper, or thesis, enables the reader to critically evaluate the study’s validity and reliability by addressing how the data was collected or generated, and how it was analysed.
Types of research methods
There are three main types of research methods which use different designs for data collection.
(1) Qualitative research
Qualitative research gathers data about lived experiences, emotions or behaviours, and the meanings individuals attach to them. It assists in enabling researchers to gain a better understanding of complex concepts, social interactions or cultural phenomena. This type of research is useful in the exploration of how or why things have occurred, interpreting events and describing actions.
Examples of qualitative research designs include:
- focus groups
- observations
- document analysis
- oral history or life stories
(2) Quantitative research
Quantitative research gathers numerical data which can be ranked, measured or categorised through statistical analysis. It assists with uncovering patterns or relationships, and for making generalisations. This type of research is useful for finding out how many, how much, how often, or to what extent.
Examples of quantitative research designs include:
- surveys or questionnaires
- observation
- document screening
- experiments
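To make the "how many, how much, how often" character of quantitative analysis concrete, the short sketch below computes simple descriptive statistics for a set of survey responses; the data and the rating scale are invented for illustration only.

```python
# Hypothetical survey responses: satisfaction rated on a 1-5 scale.
from collections import Counter
from statistics import mean, stdev

responses = [4, 5, 3, 4, 2, 5, 4, 3, 4, 5]

print("n =", len(responses))
print("frequency of each rating:", dict(sorted(Counter(responses).items())))
print(f"mean = {mean(responses):.2f}, sd = {stdev(responses):.2f}")
```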
(3) Mixed method research
Mixed Methods research integrates both Qualitative research and Quantitative research. It provides a holistic approach combining and analysing the statistical data with deeper contextualised insights. Using Mixed Methods also enables triangulation, or verification, of the data from two or more sources.
Sometimes in your literature review, you might need to discuss and evaluate relevant research methodologies in order to justify your own choice of research methodology.
When searching for literature on research methodologies it is important to search across a range of sources. No single information source will supply all that you need. Selecting appropriate sources will depend upon your research topic.
Developing a robust search strategy will help reduce irrelevant results. It is good practice to plan a strategy before you start to search.
Search tips
(1) Free text keywords
Free text searching is the use of natural language words to conduct your search. Use selective free text keywords such as: phenomenological, "lived experience", "grounded theory", "life experiences", "focus groups", interview, quantitative, survey, validity, variance, correlation and statistical.
To locate books on your desired methodology, try LibrarySearch . Remember to use refine options such as books, ebooks, subject, and publication date.
(2) Subject headings in Databases
Databases categorise their records using subject terms, or a controlled vocabulary (thesaurus). These subject headings may be useful to use, in addition to utilising free text keywords in a database search.
Subject headings will differ across databases, for example, the PubMed database uses 'Qualitative Research' whilst the CINAHL database uses 'Qualitative Studies.'
(3) Limiting search results
Databases enable sets of results to be limited or filtered by specific fields, look for options such as Publication Type, Article Type, etc. and apply them to your search.
(4) Browse the Library shelves
To find books on research methods browse the Library shelves at call number 001.42
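The search tips above (free-text keywords, subject headings, and limits) can also be combined into a single Boolean query and run programmatically. The sketch below is one possible illustration using NCBI's public E-utilities endpoint for PubMed; the query terms and date limits are examples only, it requires network access, and other databases such as CINAHL use their own search syntax.

```python
# Illustrative sketch: combining free-text keywords and a MeSH subject heading
# into one Boolean query and running it against PubMed via NCBI E-utilities.
# The query terms and limits are examples only.
import requests

free_text = ['"lived experience"[Title/Abstract]', 'phenomenolog*[Title/Abstract]']
subject_heading = '"Qualitative Research"[MeSH Terms]'
query = f'({" OR ".join(free_text)} OR {subject_heading}) AND "focus groups"[Title/Abstract]'

resp = requests.get(
    "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi",
    params={
        "db": "pubmed",
        "term": query,
        "retmode": "json",
        "retmax": 20,
        "mindate": "2015",   # example publication-date limit
        "maxdate": "2024",
        "datetype": "pdat",
    },
    timeout=30,
)
result = resp.json()["esearchresult"]
print("Records found:", result["count"])
print("First PMIDs:", result["idlist"])
```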
- SAGE Research Methods Online SAGE Research Methods Online (SRMO) is a research tool supported by a newly devised taxonomy that links content and methods terms. It provides the most comprehensive picture available today of research methods (quantitative, qualitative and mixed methods) across the social and behavioural sciences.
SAGE Research Methods Overview (2:07 min) by SAGE Publishing ( YouTube )
- Open access
- Published: 08 October 2021
Scoping reviews: reinforcing and advancing the methodology and application
- Micah D. J. Peters 1 , 2 , 3 ,
- Casey Marnie 1 ,
- Heather Colquhoun 4 , 5 ,
- Chantelle M. Garritty 6 ,
- Susanne Hempel 7 ,
- Tanya Horsley 8 ,
- Etienne V. Langlois 9 ,
- Erin Lillie 10 ,
- Kelly K. O’Brien 5 , 11 , 12 ,
- Ӧzge Tunçalp 13 ,
- Michael G. Wilson 14 , 15 , 16 ,
- Wasifa Zarin 17 &
- Andrea C. Tricco ORCID: orcid.org/0000-0002-4114-8971 17 , 18 , 19
Systematic Reviews volume 10 , Article number: 263 ( 2021 ) Cite this article
53k Accesses
251 Citations
Scoping reviews are an increasingly common approach to evidence synthesis with a growing suite of methodological guidance and resources to assist review authors with their planning, conduct and reporting. The latest guidance for scoping reviews includes the JBI methodology and the Preferred Reporting Items for Systematic Reviews and Meta-Analyses—Extension for Scoping Reviews. This paper provides readers with a brief update regarding ongoing work to enhance and improve the conduct and reporting of scoping reviews as well as information regarding the future steps in scoping review methods development. The purpose of this paper is to provide readers with a concise source of information regarding the difference between scoping reviews and other review types, the reasons for undertaking scoping reviews, and an update on methodological guidance for the conduct and reporting of scoping reviews.
Despite available guidance, some publications use the term ‘scoping review’ without clear consideration of available reporting and methodological tools. Selection of the most appropriate review type for the stated research objectives or questions, standardised use of methodological approaches and terminology in scoping reviews, clarity and consistency of reporting and ensuring that the reporting and presentation of the results clearly addresses the review’s objective(s) and question(s) are critical components for improving the rigour of scoping reviews.
Rigorous, high-quality scoping reviews should clearly follow up-to-date methodological guidance and reporting criteria. Stakeholder engagement is one area where further work could occur to enhance integration of consultation with the results of evidence syntheses and to support effective knowledge translation. Scoping review methodology is evolving as a policy and decision-making tool. Ensuring the integrity of scoping reviews by adherence to up-to-date reporting standards is integral to supporting well-informed decision-making.
Introduction
Given the readily increasing access to evidence and data, methods of identifying, charting and reporting on information must be driven by new, user-friendly approaches. Since 2005, when the first framework for scoping reviews was published, several more detailed approaches (both methodological guidance and a reporting guideline) have been developed. Scoping reviews are an increasingly common approach to evidence synthesis which is very popular amongst end users [ 1 ]. Indeed, one scoping review of scoping reviews found that 53% (262/494) of scoping reviews had government authorities and policymakers as their target end-user audience [ 2 ]. Scoping reviews can provide end users with important insights into the characteristics of a body of evidence, the ways in which concepts or terms have been used, and how a topic has been reported upon. Scoping reviews can provide overviews of either broad or specific research and policy fields, underpin research and policy agendas, highlight knowledge gaps and identify areas for subsequent evidence syntheses [ 3 ].
Despite or even potentially because of the range of different approaches to conducting and reporting scoping reviews that have emerged since Arksey and O’Malley’s first framework in 2005, it appears that lack of consistency in use of terminology, conduct and reporting persist [ 2 , 4 ]. There are many examples where manuscripts are titled ‘a scoping review’ without citing or appearing to follow any particular approach [ 5 , 6 , 7 , 8 , 9 ]. This is similar to how many reviews appear to misleadingly include ‘systematic’ in the title or purport to have adhered to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement without doing so. Despite the publication of the PRISMA Extension for Scoping Reviews (PRISMA-ScR) and other recent guidance [ 4 , 10 , 11 , 12 , 13 , 14 ], many scoping reviews continue to be conducted and published without apparent (i.e. cited) consideration of these tools or only cursory reference to Arksey and O’Malley’s original framework. We can only speculate at this stage why many authors appear to be either unaware of or unwilling to adopt more recent methodological guidance and reporting items in their work. It could be that some authors are more familiar and comfortable with the older, less prescriptive framework and see no reason to change. It could be that more recent methodologies such as JBI’s guidance and the PRISMA-ScR appear more complicated and onerous to comply with and so may possibly be unfit for purpose from the perspective of some authors. In their 2005 publication, Arksey and O’Malley themselves called for scoping review (then scoping study) methodology to continue to be advanced and built upon by subsequent authors, so it is interesting to note a persistent resistance or lack of awareness from some authors. Whatever the reason or reasons, we contend that transparency and reproducibility are key markers of high-quality reporting of scoping reviews and that reporting a review’s conduct and results clearly and consistently in line with a recognised methodology or checklist is more likely than not to enhance rigour and utility. Scoping reviews should not be used as a synonym for an exploratory search or general review of the literature. Instead, it is critical that potential authors recognise the purpose and methodology of scoping reviews. In this editorial, we discuss the definition of scoping reviews, introduce contemporary methodological guidance and address the circumstances where scoping reviews may be conducted. Finally, we briefly consider where ongoing advances in the methodology are occurring.
What is a scoping review and how is it different from other evidence syntheses?
A scoping review is a type of evidence synthesis that has the objective of identifying and mapping relevant evidence that meets pre-determined inclusion criteria regarding the topic, field, context, concept or issue under review. The review question guiding a scoping review is typically broader than that of a traditional systematic review. Scoping reviews may include multiple types of evidence (i.e. different research methodologies, primary research, reviews, non-empirical evidence). Because scoping reviews seek to develop a comprehensive overview of the evidence rather than a quantitative or qualitative synthesis of data, it is not usually necessary to undertake methodological appraisal/risk of bias assessment of the sources included in a scoping review. Scoping reviews systematically identify and chart relevant literature that meet predetermined inclusion criteria available on a given topic to address specified objective(s) and review question(s) in relation to key concepts, theories, data and evidence gaps. Scoping reviews are unlike ‘evidence maps’ which can be defined as the figural or graphical presentation of the results of a broad and systematic search to identify gaps in knowledge and/or future research needs often using a searchable database [ 15 ]. Evidence maps can be underpinned by a scoping review or be used to present the results of a scoping review. Scoping reviews are similar to but distinct from other well-known forms of evidence synthesis of which there are many [ 16 ]. Whilst this paper’s purpose is not to go into depth regarding the similarities and differences between scoping reviews and the diverse range of other evidence synthesis approaches, Munn and colleagues recently discussed the key differences between scoping reviews and other common review types [ 3 ]. Like integrative reviews and narrative literature reviews, scoping reviews can include both research (i.e. empirical) and non-research evidence (grey literature) such as policy documents and online media [ 17 , 18 ]. Scoping reviews also address broader questions beyond the effectiveness of a given intervention typical of ‘traditional’ (i.e. Cochrane) systematic reviews or peoples’ experience of a particular phenomenon of interest (i.e. JBI systematic review of qualitative evidence). Scoping reviews typically identify, present and describe relevant characteristics of included sources of evidence rather than seeking to combine statistical or qualitative data from different sources to develop synthesised results.
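Because scoping reviews chart the characteristics of included sources rather than pooling data, the extraction step often takes the form of a structured charting table. The sketch below illustrates one possible charting record exported to a CSV file; the fields and example entries are plausible illustrations, not a prescribed template from any particular guidance.

```python
# Illustrative data-charting structure for a scoping review.
# The fields and example entries are hypothetical, not a prescribed template.
import csv
from dataclasses import dataclass, asdict, fields

@dataclass
class ChartedSource:
    citation: str
    year: int
    country: str
    evidence_type: str          # e.g. RCT, qualitative study, policy document
    population_or_context: str
    key_concepts: str

charted = [
    ChartedSource("Author A et al. (hypothetical)", 2019, "Canada",
                  "qualitative study", "community nurses", "shift work; fatigue"),
    ChartedSource("Agency report B (hypothetical)", 2021, "Australia",
                  "grey literature", "policy context", "workforce planning"),
]

with open("charting_table.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=[fld.name for fld in fields(ChartedSource)])
    writer.writeheader()
    writer.writerows(asdict(row) for row in charted)
```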
Similar to systematic reviews, the conduct of scoping reviews should be based on well-defined methodological guidance and reporting standards that include an a priori protocol, eligibility criteria and comprehensive search strategy [ 11 , 12 ]. Unlike systematic reviews, however, scoping reviews may be iterative and flexible and whilst any deviations from the protocol should be transparently reported, adjustments to the questions, inclusion/exclusion criteria and search may be made during the conduct of the review [ 4 , 14 ]. Unlike systematic reviews where implications or recommendations for practice are a key feature, scoping reviews are not designed to underpin clinical practice decisions; hence, assessment of methodological quality or risk of bias of included studies (which is critical when reporting effect size estimates) is not a mandatory step and often does not occur [ 10 , 12 ]. Rapid reviews are another popular review type, but as yet have no consistent, best practice methodology [ 19 ]. Rapid reviews can be understood to be streamlined forms of other review types (i.e. systematic, integrative and scoping reviews) [ 20 ].
Guidance to improve the quality of reporting of scoping reviews
Since the first 2005 framework for scoping reviews (then termed ‘scoping studies’) [ 13 ], the popularity of this approach has grown, with numbers doubling between 2014 and 2017 [ 2 ]. The PRISMA-ScR is the most up-to-date and advanced approach for reporting scoping reviews which is largely based on the popular PRISMA statement and checklist, the JBI methodological guidance and other approaches for undertaking scoping reviews [ 11 ]. Experts in evidence synthesis including authors of earlier guidance for scoping reviews developed the PRISMA-ScR checklist and explanation using a robust and comprehensive approach. Enhancing transparency and uniformity of reporting scoping reviews using the PRISMA-ScR can help to improve the quality and value of a scoping review to readers and end users [ 21 ]. The PRISMA-ScR is not a methodological guideline for review conduct, but rather a complementary checklist to support comprehensive reporting of methods and findings that can be used alongside other methodological guidance [ 10 , 12 , 13 , 14 ]. For this reason, authors who are more familiar with or prefer Arksey and O’Malley’s framework; Levac, Colquhoun and O’Brien’s extension of that framework or JBI’s methodological guidance could each select their preferred methodological approach and report in accordance with the PRISMA-ScR checklist.
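Because the PRISMA-ScR is a reporting checklist rather than a conduct methodology, one simple way for authors to use it is to track which items a draft has already addressed. The sketch below is illustrative only: the item names are an abbreviated paraphrase of a few PRISMA-ScR topics, not the official checklist wording or numbering, and the draft status values are invented.

```python
# Illustrative sketch: tracking which reporting items a draft scoping review has
# addressed. The item names paraphrase a few PRISMA-ScR topics and are not the
# official checklist wording or numbering.

CHECKLIST_ITEMS = [
    "title identifies the report as a scoping review",
    "rationale and objectives stated",
    "protocol and registration reported",
    "eligibility criteria specified",
    "information sources and full search strategy presented",
    "data charting process described",
    "results mapped to the review question(s)",
]

draft_status = {
    "title identifies the report as a scoping review": True,
    "rationale and objectives stated": True,
    "eligibility criteria specified": True,
}

missing = [item for item in CHECKLIST_ITEMS if not draft_status.get(item, False)]
print(f"{len(CHECKLIST_ITEMS) - len(missing)}/{len(CHECKLIST_ITEMS)} items addressed")
for item in missing:
    print("still to report:", item)
```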
Reasons for conducting a scoping review
Whilst systematic reviews sit at the top of the evidence hierarchy, the types of research questions they address are not suitable for every application [ 3 ]. Many indications more appropriately require a scoping review. For example, to explore the extent and nature of a body of literature, the development of evidence maps and summaries; to inform future research and reviews and to identify evidence gaps [ 2 ]. Scoping reviews are particularly useful where evidence is extensive and widely dispersed (i.e. many different types of evidence), or emerging and not yet amenable to questions of effectiveness [ 22 ]. Because scoping reviews are agnostic in terms of the types of evidence they can draw upon, they can be used to bring together and report upon heterogeneous literature—including both empirical and non-empirical evidence—across disciplines within and beyond health [ 23 , 24 , 25 ].
When deciding between whether to conduct a systematic review or a scoping review, authors should have a strong understanding of their differences and be able to clearly identify their review’s precise research objective(s) and/or question(s). Munn and colleagues noted that a systematic review is likely the most suitable approach if reviewers intend to address questions regarding the feasibility, appropriateness, meaningfulness or effectiveness of a specified intervention [ 3 ]. There are also online resources for prospective authors [ 26 ]. A scoping review is probably best when research objectives or review questions involve exploring, identifying, mapping, reporting or discussing characteristics or concepts across a breadth of evidence sources.
Scoping reviews are increasingly used to respond to complex questions where comparing interventions may be neither relevant nor possible [ 27 ]. Often, cost, time, and resources are factors in decisions regarding review type. Whilst many scoping reviews can be quite large with numerous sources to screen and/or include, there is no expectation or possibility of statistical pooling, formal risk of bias rating, and quality of evidence assessment [ 28 , 29 ]. Topics where scoping reviews are necessary abound—for example, government organisations are often interested in the availability and applicability of tools to support health interventions, such as shared decision aids for pregnancy care [ 30 ]. Scoping reviews can also be applied to better understand complex issues related to the health workforce, such as how shift work impacts employee performance across diverse occupational sectors, which involves a diversity of evidence types as well as attention to knowledge gaps [ 31 ]. Another example is where more conceptual knowledge is required, for example, identifying and mapping existing tools [ 32 ]. Here, it is important to understand that scoping reviews are not the same as ‘realist reviews’ which can also be used to examine how interventions or programmes work. Realist reviews are typically designed to elucidate the theories that underpin a programme, examine evidence to reveal if and how those theories are relevant and explain how the given programme works (or not) [ 33 ].
Increased demand for scoping reviews to underpin high-quality knowledge translation across many disciplines within and beyond healthcare in turn fuels the need for consistency, clarity and rigour in reporting; hence, following recognised reporting guidelines is a streamlined and effective way of introducing these elements [ 34 ]. Standardisation and clarity of reporting (such as by using a published methodology and a reporting checklist—the PRISMA-ScR) can facilitate better understanding and uptake of the results of scoping reviews by end users who are able to more clearly understand the differences between systematic reviews, scoping reviews and literature reviews and how their findings can be applied to research, practice and policy.
Future directions in scoping reviews
The field of evidence synthesis is dynamic. Scoping review methodology continues to evolve to account for the changing needs and priorities of end users and the requirements of review authors for additional guidance regarding terminology, elements and steps of scoping reviews. Areas where ongoing research and development of scoping review guidance are occurring include inclusion of consultation with stakeholder groups such as end users and consumer representatives [ 35 ], clarity on when scoping reviews are the appropriate method over other synthesis approaches [ 3 ], approaches for mapping and presenting results in ways that clearly address the review’s research objective(s) and question(s) [ 29 ] and the assessment of the methodological quality of scoping reviews themselves [ 21 , 36 ]. The JBI Scoping Review Methodology group is currently working on this research agenda.
Consulting with end users, experts, or stakeholders has been a suggested but optional component of scoping reviews since 2005. Many of the subsequent approaches contained some reference to this useful activity. Stakeholder engagement is however often lost to the term ‘review’ in scoping reviews. Stakeholder engagement is important across all knowledge synthesis approaches to ensure relevance, contextualisation and uptake of research findings. In fact, it underlines the concept of integrated knowledge translation [ 37 , 38 ]. By including stakeholder consultation in the scoping review process, the utility and uptake of results may be enhanced making reviews more meaningful to end users. Stakeholder consultation can also support integrating knowledge translation efforts, facilitate identifying emerging priorities in the field not otherwise captured in the literature and may help build partnerships amongst stakeholder groups including consumers, researchers, funders and end users. Development in the field of evidence synthesis overall could be inspired by the incorporation of stakeholder consultation in scoping reviews and lead to better integration of consultation and engagement within projects utilising other synthesis methodologies. This highlights how further work could be conducted into establishing how and the extent to which scoping reviews have contributed to synthesising evidence and advancing scientific knowledge and understandings in a more general sense.
Currently, many methodological papers for scoping reviews are published in healthcare-focussed journals and associated disciplines [ 6 , 39 , 40 , 41 , 42 , 43 ]. Another area where further work could also occur is to gain a greater understanding of how scoping reviews and scoping review methodology are being used across disciplines beyond healthcare, including how authors, reviewers and editors understand, recommend or utilise existing guidance for undertaking and reporting scoping reviews.
Whilst available guidance for the conduct and reporting of scoping reviews has evolved over recent years, opportunities remain to further enhance and progress the methodology, uptake and application. Despite existing guidance, some publications using the term ‘scoping review’ continue to be conducted without apparent consideration of available reporting and methodological tools. Because consistent and transparent reporting is widely recognised as important for supporting rigour, reproducibility and quality in research, we advocate for authors to use a stated scoping review methodology and to transparently report their conduct by using the PRISMA-ScR. Selection of the most appropriate review type for the stated research objectives or questions, standardising the use of methodological approaches and terminology in scoping reviews, clarity and consistency of reporting and ensuring that the reporting and presentation of the results clearly addresses the authors’ objective(s) and question(s) are also critical components for improving the rigour of scoping reviews. We contend that whilst the field of evidence synthesis and scoping reviews continues to evolve, use of the PRISMA-ScR is a valuable and practical tool for enhancing the quality of scoping reviews, particularly in combination with other methodological guidance [ 10 , 12 , 44 ]. Scoping review methodology is developing as a policy and decision-making tool, and so ensuring the integrity of these reviews by adhering to the most up-to-date reporting standards is integral to supporting well-informed decision-making. As scoping review methodology continues to evolve alongside understandings regarding why authors do or do not use particular methodologies, we hope that future incarnations of scoping review methodology continue to provide useful, high-quality evidence to end users.
Availability of data and materials
All data and materials are available upon request.
Pham MT, Rajić A, Greig JD, Sargeant JM, Papadopoulos A, McEwen SA. A scoping review of scoping reviews: advancing the approach and enhancing the consistency. Res Synth Methods. 2014;5(4):371–85.
Article Google Scholar
Tricco AC, Lillie E, Zarin W, et al. A scoping review on the conduct and reporting of scoping reviews. BMC Med Res Methodol. 2016;16:15.
Munn Z, Peters MDJ, Stern C, Tufanaru C, McArthur A, Aromataris E. Systematic review or scoping review? Guidance for authors when choosing between a systematic or scoping review approach. BMC Med Res Methodol. 2018;18(1):143.
Peters M, Marnie C, Tricco A, et al. Updated methodological guidance for the conduct of scoping reviews. JBI Evid Synth. 2020;18(10):2119–26.
Paiva L, Dalmolin GL, Andolhe R, dos Santos W. Absenteeism of hospital health workers: scoping review. Av enferm. 2020;38(2):234–48.
Visonà MW, Plonsky L. Arabic as a heritage language: a scoping review. Int J Biling. 2019;24(4):599–615.
McKerricher L, Petrucka P. Maternal nutritional supplement delivery in developing countries: a scoping review. BMC Nutr. 2019;5(1):8.
Article CAS Google Scholar
Fusar-Poli P, Salazar de Pablo G, De Micheli A, et al. What is good mental health? A scoping review. Eur Neuropsychopharmacol. 2020;31:33–46.
Jowsey T, Foster G, Cooper-Ioelu P, Jacobs S. Blended learning via distance in pre-registration nursing education: a scoping review. Nurse Educ Pract. 2020;44:102775.
Peters MD, Godfrey CM, Khalil H, McInerney P, Parker D, Soares CB. Guidance for conducting systematic scoping reviews. Int J Evid-based Healthc. 2015;13(3):141–6.
Tricco AC, Lillie E, Zarin W, et al. PRISMA extension for scoping reviews (PRISMA-ScR): checklist and explanation. Ann Intern Med. 2018;169(7):467–73.
Peters MDJ, Godfrey C, McInerney P, Munn Z, Tricco AC, Khalil H. Chapter 11: scoping reviews (2020 version). In: Aromataris E, Munn Z, editors. JBI manual for evidence synthesis: JBI; 2020.
Google Scholar
Arksey H, O’Malley L. Scoping studies: towards a methodological framework. Int J Soc Res Methodol. 2005;8(1):19–32.
Levac D, Colquhoun H, O’Brien KK. Scoping studies: advancing the methodology. Implement Sci. 2010;5(1):69.
Miake-Lye IM, Hempel S, Shanman R, Shekelle PG. What is an evidence map? A systematic review of published evidence maps and their definitions, methods, and products. Syst Rev. 2016;5(1):28.
Sutton A, Clowes M, Preston L, Booth A. Meeting the review family: exploring review types and associated information retrieval requirements. Health Inf Libr J. 2019;36(3):202–22.
Brady BR, De La Rosa JS, Nair US, Leischow SJ. Electronic cigarette policy recommendations: a scoping review. Am J Health Behav. 2019;43(1):88–104.
Truman E, Elliott C. Identifying food marketing to teenagers: a scoping review. Int J Behav Nutr Phys Act. 2019;16(1):67.
Tricco AC, Antony J, Zarin W, et al. A scoping review of rapid review methods. BMC Med. 2015;13(1):224.
Moher D, Stewart L, Shekelle P. All in the family: systematic reviews, rapid reviews, scoping reviews, realist reviews, and more. Syst Rev. 2015;4(1):183.
Tricco AC, Zarin W, Ghassemi M, et al. Same family, different species: methodological conduct and quality varies according to purpose for five types of knowledge synthesis. J Clin Epidemiol. 2018;96:133–42.
Barker M, Adelson P, Peters MDJ, Steen M. Probiotics and human lactational mastitis: a scoping review. Women Birth. 2020;33(6):e483–e491.
O’Donnell N, Kappen DL, Fitz-Walter Z, Deterding S, Nacke LE, Johnson D. How multidisciplinary is gamification research? Results from a scoping review. Extended abstracts publication of the annual symposium on computer-human interaction in play. Amsterdam: Association for Computing Machinery; 2017. p. 445–52.
O’Flaherty J, Phillips C. The use of flipped classrooms in higher education: a scoping review. Internet High Educ. 2015;25:85–95.
Di Pasquale V, Miranda S, Neumann WP. Ageing and human-system errors in manufacturing: a scoping review. Int J Prod Res. 2020;58(15):4716–40.
Knowledge Synthesis Team. What review is right for you? 2019. https://whatreviewisrightforyou.knowledgetranslation.net/
Lv M, Luo X, Estill J, et al. Coronavirus disease (COVID-19): a scoping review. Euro Surveill. 2020;25(15):2000125.
Shemilt I, Simon A, Hollands GJ, et al. Pinpointing needles in giant haystacks: use of text mining to reduce impractical screening workload in extremely large scoping reviews. Res Synth Methods. 2014;5(1):31–49.
Khalil H, Bennett M, Godfrey C, McInerney P, Munn Z, Peters M. Evaluation of the JBI scoping reviews methodology by current users. Int J Evid-based Healthc. 2020;18(1):95–100.
Kennedy K, Adelson P, Fleet J, et al. Shared decision aids in pregnancy care: a scoping review. Midwifery. 2020;81:102589.
Dall’Ora C, Ball J, Recio-Saucedo A, Griffiths P. Characteristics of shift work and their impact on employee performance and wellbeing: a literature review. Int J Nurs Stud. 2016;57:12–27.
Feo R, Conroy T, Wiechula R, Rasmussen P, Kitson A. Instruments measuring behavioural aspects of the nurse–patient relationship: a scoping review. J Clin Nurs. 2020;29(11-12):1808–21.
Rycroft-Malone J, McCormack B, Hutchinson AM, et al. Realist synthesis: illustrating the method for implementation research. Implement Sci. 2012;7(1):33.
Colquhoun HL, Levac D, O’Brien KK, et al. Scoping reviews: time for clarity in definition, methods, and reporting. J Clin Epidemiol. 2014;67(12):1291–4.
Tricco AC, Zarin W, Rios P, et al. Engaging policy-makers, health system managers, and policy analysts in the knowledge synthesis process: a scoping review. Implement Sci. 2018;13(1):31.
Cooper S, Cant R, Kelly M, et al. An evidence-based checklist for improving scoping review quality. Clin Nurs Res. 2021;30(3):230–240.
Pollock A, Campbell P, Struthers C, et al. Stakeholder involvement in systematic reviews: a scoping review. Syst Rev. 2018;7(1):208.
Tricco AC, Zarin W, Rios P, Pham B, Straus SE, Langlois EV. Barriers, facilitators, strategies and outcomes to engaging policymakers, healthcare managers and policy analysts in knowledge synthesis: a scoping review protocol. BMJ Open. 2016;6(12):e013929.
Denton M, Borrego M. Funds of knowledge in STEM education: a scoping review. Stud Eng Educ. 2021;1(2):71–92.
Masta S, Secules S. When critical ethnography leaves the field and enters the engineering classroom: a scoping review. Stud Eng Educ. 2021;2(1):35–52.
Li Y, Marier-Bienvenue T, Perron-Brault A, Wang X, Pare G. Blockchain technology in business organizations: a scoping review. In: Proceedings of the 51st Hawaii international conference on system sciences ; 2018. https://core.ac.uk/download/143481400.pdf
Houlihan M, Click A, Wiley C. Twenty years of business information literacy research: a scoping review. Evid. Based Libr. Inf. Pract. 2020;15(4):124–163.
Plug I, Stommel W, Lucassen P, Hartman T, Van Dulmen S, Das E. Do women and men use language differently in spoken face-to-face interaction? A scoping review. Rev Commun Res. 2021;9:43–79.
McGowan J, Straus S, Moher D, et al. Reporting scoping reviews - PRISMA ScR extension. J Clin Epidemiol. 2020;123:177–9.
Acknowledgements
The authors would like to acknowledge the other members of the Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews (PRISMA-ScR) working group as well as Shazia Siddiqui, a research assistant in the Knowledge Synthesis Team in the Knowledge Translation Program, Li Ka Shing Knowledge Institute, St. Michael's Hospital, Unity Health Toronto.
Funding
The authors declare that no specific funding was received for this work. Author ACT declares that she is funded by a Tier 2 Canada Research Chair in Knowledge Synthesis. KKO is supported by a Canada Research Chair in Episodic Disability and Rehabilitation with the Canada Research Chairs Program.
Author information
Authors and affiliations.
University of South Australia, UniSA Clinical and Health Sciences, Rosemary Bryant AO Research Centre, Playford Building P4-27, City East Campus, North Terrace, Adelaide, 5000, South Australia
Micah D. J. Peters & Casey Marnie
Adelaide Nursing School, Faculty of Health and Medical Sciences, The University of Adelaide, 101 Currie St, Adelaide, 5001, South Australia
Micah D. J. Peters
The Centre for Evidence-based Practice South Australia (CEPSA): a Joanna Briggs Institute Centre of Excellence, Faculty of Health and Medical Sciences, The University of Adelaide, 5006, Adelaide, South Australia
Department of Occupational Science and Occupational Therapy, University of Toronto, Terrence Donnelly Health Sciences Complex, 3359 Mississauga Rd, Toronto, Ontario, L5L 1C6, Canada
Heather Colquhoun
Rehabilitation Sciences Institute (RSI), University of Toronto, St. George Campus, 160-500 University Avenue, Toronto, Ontario, M5G 1V7, Canada
Heather Colquhoun & Kelly K. O’Brien
Knowledge Synthesis Group, Ottawa Hospital Research Institute, 1053 Carling Avenue, Ottawa, Ontario, K1Y 4E9, Canada
Chantelle M. Garritty
Southern California Evidence Review Center, University of Southern California, Los Angeles, CA, 90007, USA
Susanne Hempel
Royal College of Physicians and Surgeons of Canada, 774 Echo Drive, Ottawa, Ontario, K1S 5N8, Canada
Tanya Horsley
Partnership for Maternal, Newborn and Child Health (PMNCH), World Health Organisation, Avenue Appia 20, 1211, Geneva, Switzerland
Etienne V. Langlois
Sunnybrook Research Institute, 2075 Bayview Ave, Toronto, Ontario, M4N 3M5, Canada
Erin Lillie
Department of Physical Therapy, University of Toronto, St. George Campus, 160-500 University Avenue, Toronto, Ontario, M5G 1V7, Canada
Kelly K. O’Brien
Institute of Health Policy, Management and Evaluation (IHPME), University of Toronto, St. George Campus, 155 College Street 4th Floor, Toronto, Ontario, M5T 3M6, Canada
UNDP/UNFPA/UNICEF/WHO/World Bank Special Programme of Research, Development and Research Training in Human Reproduction (HRP), Department of Sexual and Reproductive Health and Research, World Health Organisation, Avenue Appia 20, 1211, Geneva, Switzerland
Ӧzge Tunçalp
McMaster Health Forum, McMaster University, 1280 Main Street West, Hamilton, Ontario, L8S 4L8, Canada
Michael G. Wilson
Department of Health Evidence and Impact, McMaster University, 1280 Main Street West, Hamilton, Ontario, L8S 4L8, Canada
Centre for Health Economics and Policy Analysis, McMaster University, 1280 Main Street West, Hamilton, Ontario, L8S 4L8, Canada
Knowledge Translation Program, Li Ka Shing Knowledge Institute, St. Michael’s Hospital, Unity Health Toronto, 209 Victoria Street, East Building, Toronto, Ontario, M5B 1T8, Canada
Wasifa Zarin & Andrea C. Tricco
Epidemiology Division and Institute for Health Policy, Management, and Evaluation, Dalla Lana School of Public Health, University of Toronto, 155 College St, Room 500, Toronto, Ontario, M5T 3M7, Canada
Andrea C. Tricco
Queen’s Collaboration for Health Care Quality Joanna Briggs Institute Centre of Excellence, School of Nursing, Queen’s University, 99 University Ave, Kingston, Ontario, K7L 3N6, Canada
Contributions
MDJP, CM, HC, CMG, SH, TH, EVL, EL, KKO, OT, MGW, WZ and AT all made substantial contributions to the conception, design and drafting of the work. MDJP and CM prepared the final version of the manuscript. All authors reviewed and approved the final version of the manuscript.
Corresponding author
Correspondence to Andrea C. Tricco .
Ethics declarations
Ethics approval and consent to participate.
Not applicable.
Consent for publication
Competing interests.
Author ACT is an Associate Editor for the journal. All other authors declare no conflicts of interest.
Additional information
Publisher’s note.
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
About this article
Cite this article
Peters, M.D.J., Marnie, C., Colquhoun, H., et al. Scoping reviews: reinforcing and advancing the methodology and application. Syst Rev 10, 263 (2021). https://doi.org/10.1186/s13643-021-01821-3
Received: 29 January 2021
Accepted: 27 September 2021
Published: 08 October 2021
DOI: https://doi.org/10.1186/s13643-021-01821-3
Keywords
- Scoping reviews
- Evidence synthesis
- Research methodology
- Reporting guidelines
- Methodological guidance
What is Research Methodology? Definition, Types, and Examples
Research methodology 1,2 is a structured and scientific approach used to collect, analyze, and interpret quantitative or qualitative data to answer research questions or test hypotheses. A research methodology is like a plan for carrying out research and helps keep researchers on track by limiting the scope of the research. Several aspects must be considered before selecting an appropriate research methodology, such as research limitations and ethical concerns that may affect your research.
The research methodology section in a scientific paper describes the different methodological choices made, such as the data collection and analysis methods, and why these choices were selected. The reasons should explain why the methods chosen are the most appropriate to answer the research question. A good research methodology also helps ensure the reliability and validity of the research findings. There are three types of research methodology—quantitative, qualitative, and mixed-method, which can be chosen based on the research objectives.
What is research methodology?
A research methodology describes the techniques and procedures used to identify and analyze information regarding a specific research topic. It is a process by which researchers design their study so that they can achieve their objectives using the selected research instruments. It includes all the important aspects of research, including research design, data collection methods, data analysis methods, and the overall framework within which the research is conducted. While these points can help you understand what research methodology is, you also need to know why it is important to pick the right methodology.
Having a good research methodology in place has the following advantages: 3
- Helps other researchers who may want to replicate your research, because clear methodological explanations allow them to follow the same procedures.
- You can easily answer any questions about your research if they arise at a later stage.
- A research methodology provides a framework and guidelines for researchers to clearly define research questions, hypotheses, and objectives.
- It helps researchers identify the most appropriate research design, sampling technique, and data collection and analysis methods.
- A sound research methodology helps researchers ensure that their findings are valid and reliable and free from biases and errors.
- It also helps ensure that ethical guidelines are followed while conducting research.
- A good research methodology helps researchers in planning their research efficiently, by ensuring optimum usage of their time and resources.
Writing the methods section of a research paper? Let Paperpal help you achieve perfection
Types of research methodology
There are three types of research methodology based on the type of research and the data required. 1
- Quantitative research methodology focuses on measuring and testing numerical data. This approach is good for reaching a large number of people in a short amount of time. This type of research helps in testing the causal relationships between variables, making predictions, and generalizing results to wider populations.
- Qualitative research methodology examines the opinions, behaviors, and experiences of people. It collects and analyzes words and textual data. This research methodology typically requires fewer participants but can still be time consuming because considerable time is spent with each participant. This method is used in exploratory research where the research problem being investigated is not clearly defined.
- Mixed-method research methodology uses the characteristics of both quantitative and qualitative research methodologies in the same study. This method allows researchers to validate their findings, verify if the results observed using both methods are complementary, and explain any unexpected results obtained from one method by using the other method.
What are the types of sampling designs in research methodology?
Sampling 4 is an important part of a research methodology and involves selecting a representative sample of the population to conduct the study, making statistical inferences about them, and estimating the characteristics of the whole population based on these inferences. There are two types of sampling designs in research methodology—probability and nonprobability.
- Probability sampling
In this type of sampling design, a sample is chosen from a larger population using some form of random selection; that is, every member of the population has a known (and often equal) chance of being selected. The different types of probability sampling, a few of which are illustrated in the code sketch after these lists, are:
- Systematic —sample members are chosen at regular intervals from an ordered list. It requires selecting a random starting point and a sampling interval, derived from the required sample size, that is applied repeatedly. Because the selection pattern is predefined, it is one of the least time-consuming methods.
- Stratified —researchers divide the population into smaller, non-overlapping groups (strata) that together represent the entire population, and a sample is then drawn from each stratum separately.
- Cluster —the population is divided into clusters based on parameters such as age, sex, or location, and a random selection of clusters is then included in the sample.
- Nonprobability sampling
In this type of sampling design, participants are selected using non-random criteria, so not every member of the population has a chance of being included. The different types of nonprobability sampling are:
- Convenience —selects participants who are most easily accessible to researchers due to geographical proximity, availability at a particular time, etc.
- Purposive —participants are selected at the researcher’s discretion. Researchers consider the purpose of the study and the understanding of the target audience.
- Snowball —already selected participants use their social networks to refer the researcher to other potential participants.
- Quota —while designing the study, the researchers decide how many people with which characteristics to include as participants. The characteristics help in choosing people most likely to provide insights into the subject.
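To make these designs more concrete, here is a minimal Python sketch (not part of the original post) showing how simple random, systematic, and stratified samples might be drawn from a hypothetical sampling frame; the frame, the age-group strata, and the sample sizes are assumptions made purely for illustration.

```python
import random

# Hypothetical sampling frame: 1,000 people, each with an age-group label.
# The frame, the strata, and the sample sizes are assumptions made for this example.
random.seed(42)
population = [
    {"id": i, "age_group": random.choice(["18-34", "35-54", "55+"])}
    for i in range(1000)
]

# Simple random sampling: every member has an equal chance of selection.
simple_random = random.sample(population, k=50)

# Systematic sampling: a random starting point, then every k-th member thereafter.
interval = len(population) // 50       # sampling interval derived from the desired sample size
start = random.randrange(interval)     # random starting point within the first interval
systematic = population[start::interval]

# Stratified sampling: split the frame into non-overlapping strata (here, age groups)
# and draw a separate random sample from each stratum.
strata = {}
for person in population:
    strata.setdefault(person["age_group"], []).append(person)
stratified = [p for members in strata.values() for p in random.sample(members, k=10)]

print(len(simple_random), len(systematic), len(stratified))
```

The nonprobability designs (convenience, purposive, snowball, quota) are driven by researcher judgment rather than random selection, so they are not usefully reduced to a few lines of code.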
What are data collection methods?
During research, data are collected using various methods depending on the research methodology being followed and the research methods being undertaken. Both qualitative and quantitative research have different data collection methods, as listed below.
Qualitative research 5
- One-on-one interviews: Helps the interviewers understand a respondent’s subjective opinion and experience pertaining to a specific topic or event
- Document study/literature review/record keeping: Researchers’ review of already existing written materials such as archives, annual reports, research articles, guidelines, policy documents, etc.
- Focus groups: Constructive discussions that usually include a small sample of about 6-10 people and a moderator, to understand the participants’ opinion on a given topic.
- Qualitative observation : Researchers collect data using their five senses (sight, smell, touch, taste, and hearing).
Quantitative research 6
- Sampling: The most common type is probability sampling.
- Interviews: Commonly conducted by telephone or in person.
- Observations: Structured observations are most commonly used in quantitative research. In this method, researchers make observations about specific behaviors of individuals in a structured setting.
- Document review: Reviewing existing research or documents to collect evidence for supporting the research.
- Surveys and questionnaires: Surveys can be administered either online or offline, depending on the requirements and sample size.
Let Paperpal help you write the perfect research methods section. Start now!
What are data analysis methods?
The data collected using the various methods for qualitative and quantitative research need to be analyzed to generate meaningful conclusions. These data analysis methods 7 also differ between quantitative and qualitative research.
Quantitative research involves a deductive method for data analysis in which hypotheses are developed at the beginning of the research and precise measurement is required. The methods rely on statistical analysis of numerical data and are grouped into two categories—descriptive and inferential.
Descriptive analysis is used to summarize the basic features of different types of data so that patterns in the data become apparent and meaningful. The different types of descriptive analysis methods, illustrated in the code sketch after this list, are:
- Measures of frequency (count, percent, frequency)
- Measures of central tendency (mean, median, mode)
- Measures of dispersion or variation (range, variance, standard deviation)
- Measure of position (percentile ranks, quartile ranks)
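As a hedged illustration (not part of the original post), the following Python sketch computes the four families of descriptive measures listed above for a small set of hypothetical exam scores; the data values are invented for the example.

```python
from collections import Counter
from statistics import mean, median, mode, quantiles, stdev, variance

# Hypothetical quantitative data: exam scores for twelve participants
# (values invented for this illustration).
scores = [72, 85, 91, 68, 85, 77, 95, 62, 85, 74, 88, 79]

# Measures of frequency: counts and percentages of each observed score.
counts = Counter(scores)
percentages = {score: 100 * n / len(scores) for score, n in counts.items()}

# Measures of central tendency.
central_tendency = {"mean": mean(scores), "median": median(scores), "mode": mode(scores)}

# Measures of dispersion or variation.
dispersion = {
    "range": max(scores) - min(scores),
    "variance": variance(scores),          # sample variance
    "standard_deviation": stdev(scores),   # sample standard deviation
}

# Measures of position: quartile cut points (25th, 50th, and 75th percentiles).
position = {"quartiles": quantiles(scores, n=4)}  # requires Python 3.8+

print(counts, percentages)
print(central_tendency, dispersion, position)
```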
Inferential analysis is used to draw conclusions or make predictions about a larger population based on data collected from a sample of that population. It is also used to study the relationships between different variables. Some commonly used inferential data analysis methods are listed below, followed by a short code sketch:
- Correlation: To understand the relationship between two or more variables.
- Cross-tabulation: To analyze the relationship between two or more categorical variables by tabulating their joint frequencies.
- Regression analysis: To study the impact of one or more independent variables on a dependent variable.
- Frequency tables: To understand how often different values occur in the data.
- Analysis of variance (ANOVA): To test whether the means of two or more groups differ significantly.
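The sketch below is a minimal, hedged illustration of three of these methods (correlation, regression, and a simple cross-tabulation) using only Python's standard library; it assumes Python 3.10+ for statistics.correlation and statistics.linear_regression, and the study-hours and exam-score data are invented for the example.

```python
from collections import Counter
from statistics import correlation, linear_regression  # both require Python 3.10+

# Hypothetical data: weekly study hours, exam scores, and a pass/fail label
# for ten students. All values are invented for this illustration.
hours = [2, 4, 5, 6, 7, 8, 9, 10, 11, 12]
scores = [55, 58, 62, 65, 70, 72, 78, 80, 85, 90]
passed = ["no", "no", "no", "yes", "yes", "yes", "yes", "yes", "yes", "yes"]

# Correlation: strength and direction of the linear relationship between two variables.
r = correlation(hours, scores)

# Regression analysis: impact of the independent variable (hours) on the
# dependent variable (score), expressed as a fitted line score = slope * hours + intercept.
slope, intercept = linear_regression(hours, scores)

# Frequency table of the pass/fail outcome, and a simple cross-tabulation
# of an hours band against that outcome.
outcome_frequencies = Counter(passed)
hours_band = ["low" if h < 7 else "high" for h in hours]
crosstab = Counter(zip(hours_band, passed))

print(f"Pearson r = {r:.2f}")
print(f"Fitted line: score = {slope:.2f} * hours + {intercept:.2f}")
print(outcome_frequencies, dict(crosstab))
```

An analysis of variance would normally be run with a dedicated statistics package (for example, scipy.stats.f_oneway); it is omitted here to keep the sketch dependency-free.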
Qualitative research involves an inductive method for data analysis where hypotheses are developed after data collection. The methods include the following (a toy code sketch after this list illustrates the mechanical part of content analysis):
- Content analysis: For analyzing documented information from text and images by determining the presence of certain words or concepts in texts.
- Narrative analysis: For analyzing content obtained from sources such as interviews, field observations, and surveys. The stories and opinions shared by people are used to answer research questions.
- Discourse analysis: For analyzing interactions with people in light of the social context, that is, the lifestyle and environment, in which the interaction occurs.
- Grounded theory: Involves building theory from the data themselves, through iterative cycles of data collection and analysis, to explain why a phenomenon occurs.
- Thematic analysis: To identify important themes or patterns in data and use these to address an issue.
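In practice these analyses are interpretive and usually supported by qualitative data analysis software, but the mechanical core of a simple content analysis (counting predefined concepts in text) can be sketched in a few lines of Python; the transcripts and concept keywords below are invented for the example.

```python
import re
from collections import Counter

# Hypothetical interview excerpts (invented for this illustration).
transcripts = [
    "I felt supported by my supervisor, but the workload was overwhelming.",
    "The workload is manageable when the team communicates well.",
    "Support from colleagues made the workload feel lighter.",
]

# Content analysis: count how often predefined concepts appear across the texts.
concepts = {"support": ["support", "supported"], "workload": ["workload"]}
concept_counts = Counter()
for text in transcripts:
    words = re.findall(r"[a-z]+", text.lower())
    for concept, keywords in concepts.items():
        concept_counts[concept] += sum(words.count(k) for k in keywords)

# A very simple coding pass in the spirit of thematic analysis:
# tag each excerpt with the concepts (candidate themes) it mentions.
coded_excerpts = [
    (text, [c for c, kws in concepts.items() if any(k in text.lower() for k in kws)])
    for text in transcripts
]

print(concept_counts)
for text, themes in coded_excerpts:
    print(themes, "->", text)
```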
How to choose a research methodology?
Here are some important factors to consider when choosing a research methodology: 8
- Research objectives, aims, and questions —these would help structure the research design.
- Review existing literature to identify any gaps in knowledge.
- Check the statistical requirements —if data-driven or statistical results are needed, then quantitative research is the best choice. If the research questions can be answered based on people’s opinions and perceptions, then qualitative research is most suitable.
- Sample size —sample size can often determine the feasibility of a research methodology. For a large sample, less effort- and time-intensive methods are appropriate.
- Constraints —constraints of time, geography, and resources can help define the appropriate methodology.
Got writer’s block? Kickstart your research paper writing with Paperpal now!
How to write a research methodology?
A research methodology should include the following components: 3,9
- Research design —should be selected based on the research question and the data required. Common research designs include experimental, quasi-experimental, correlational, descriptive, and exploratory.
- Research method —this can be quantitative, qualitative, or mixed-method.
- Reason for selecting a specific methodology —explain why this methodology is the most suitable to answer your research problem.
- Research instruments —explain the research instruments you plan to use, mainly referring to the data collection methods such as interviews, surveys, etc. Here as well, a reason should be mentioned for selecting the particular instrument.
- Sampling —this involves selecting a representative subset of the population being studied.
- Data collection —involves gathering data using several data collection methods, such as surveys, interviews, etc.
- Data analysis —describe the data analysis methods you will use once you’ve collected the data.
- Research limitations —mention any limitations you foresee while conducting your research.
- Validity and reliability —validity refers to the accuracy and truthfulness of the findings; reliability refers to the consistency and stability of the results over time and across different conditions (one common reliability check is sketched after this list).
- Ethical considerations —research should be conducted ethically. The considerations include obtaining consent from participants, maintaining confidentiality, and addressing conflicts of interest.
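As one concrete, hedged example of a reliability check (an assumption for illustration; the post itself does not prescribe a specific statistic), Cronbach's alpha is a widely used measure of internal consistency for multi-item scales, defined as alpha = k / (k - 1) * (1 - sum of item variances / variance of total scores). The sketch below computes it for a small set of invented Likert-scale responses.

```python
from statistics import variance

# Hypothetical responses: five participants answering a four-item Likert scale (1-5).
# Rows are participants, columns are items; all values are invented for this example.
responses = [
    [4, 5, 4, 4],
    [3, 4, 3, 3],
    [5, 5, 4, 5],
    [2, 3, 2, 3],
    [4, 4, 4, 5],
]

def cronbach_alpha(rows):
    """Internal consistency: alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))."""
    k = len(rows[0])                              # number of items in the scale
    items = list(zip(*rows))                      # transpose: one tuple of responses per item
    sum_item_variances = sum(variance(item) for item in items)
    total_scores = [sum(row) for row in rows]     # each participant's total score
    return k / (k - 1) * (1 - sum_item_variances / variance(total_scores))

print(f"Cronbach's alpha = {cronbach_alpha(responses):.2f}")
```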
Streamline Your Research Paper Writing Process with Paperpal
The methods section is a critical part of a research paper: it allows other researchers to understand your findings and replicate your work when pursuing their own research. However, it is usually also the most difficult section to write. This is where Paperpal can help you overcome writer’s block and create the first draft in minutes with Paperpal Copilot, its secure generative AI feature suite.
With Paperpal you can get research advice, write and refine your work, rephrase and verify the writing, and ensure submission readiness, all in one place. Here’s how you can use Paperpal to develop the first draft of your methods section.
- Generate an outline: Input some details about your research to instantly generate an outline for your methods section
- Develop the section: Use the outline and suggested sentence templates to expand your ideas and develop the first draft.
- Paraphrase and trim: Get clear, concise academic text with paraphrasing that conveys your work effectively and word reduction to fix redundancies.
- Choose the right words: Enhance text by choosing contextual synonyms based on how the words have been used in previously published work.
- Check and verify text: Make sure the generated text showcases your methods correctly, has all the right citations, and is original and authentic.
You can repeat this process to develop each section of your research manuscript, including the title, abstract and keywords. Ready to write your research papers faster, better, and without the stress? Sign up for Paperpal and start writing today!
Frequently Asked Questions
Q1. What are the key components of research methodology?
A1. A good research methodology has the following key components:
- Research design
- Data collection procedures
- Data analysis methods
- Ethical considerations
Q2. Why is ethical consideration important in research methodology?
A2. Ethical considerations are important in research methodology to assure readers of the reliability and validity of the study. Researchers must clearly state the ethical norms and standards followed during the conduct of the research and mention whether the study has been cleared by an institutional review board. The following ten points are important principles related to ethical considerations: 10
- Participants should not be subjected to harm.
- Respect for the dignity of participants should be prioritized.
- Full consent should be obtained from participants before the study.
- Participants’ privacy should be ensured.
- Confidentiality of the research data should be ensured.
- Anonymity of individuals and organizations participating in the research should be maintained.
- The aims and objectives of the research should not be exaggerated.
- Affiliations, sources of funding, and any possible conflicts of interest should be declared.
- Communication in relation to the research should be honest and transparent.
- Misleading information and biased representation of primary data findings should be avoided.
Q3. What is the difference between methodology and method?
A3. Research methodology is different from a research method, although both terms are often confused. Research methods are the tools used to gather data, while the research methodology provides a framework for how research is planned, conducted, and analyzed. The latter guides researchers in making decisions about the most appropriate methods for their research. Research methods refer to the specific techniques, procedures, and tools used by researchers to collect, analyze, and interpret data, for instance surveys, questionnaires, interviews, etc.
Research methodology is, thus, an integral part of a research study. It helps ensure that you stay on track to meet your research objectives and answer your research questions using the most appropriate data collection and analysis tools based on your research design.
Accelerate your research paper writing with Paperpal. Try for free now!
References
1. Research methodologies. Pfeiffer Library website. Accessed August 15, 2023. https://library.tiffin.edu/researchmethodologies/whatareresearchmethodologies
2. Types of research methodology. Eduvoice website. Accessed August 16, 2023. https://eduvoice.in/types-research-methodology/
3. The basics of research methodology: A key to quality research. Voxco. Accessed August 16, 2023. https://www.voxco.com/blog/what-is-research-methodology/
4. Sampling methods: Types with examples. QuestionPro website. Accessed August 16, 2023. https://www.questionpro.com/blog/types-of-sampling-for-social-research/
5. What is qualitative research? Methods, types, approaches, examples. Researcher.Life blog. Accessed August 15, 2023. https://researcher.life/blog/article/what-is-qualitative-research-methods-types-examples/
6. What is quantitative research? Definition, methods, types, and examples. Researcher.Life blog. Accessed August 15, 2023. https://researcher.life/blog/article/what-is-quantitative-research-types-and-examples/
7. Data analysis in research: Types & methods. QuestionPro website. Accessed August 16, 2023. https://www.questionpro.com/blog/data-analysis-in-research/#Data_analysis_in_qualitative_research
8. Factors to consider while choosing the right research methodology. PhD Monster website. Accessed August 17, 2023. https://www.phdmonster.com/factors-to-consider-while-choosing-the-right-research-methodology/
9. What is research methodology? Research and writing guides. Accessed August 14, 2023. https://paperpile.com/g/what-is-research-methodology/
10. Ethical considerations. Business research methodology website. Accessed August 17, 2023. https://research-methodology.net/research-methodology/ethical-considerations/
Paperpal is a comprehensive AI writing toolkit that helps students and researchers achieve 2x the writing in half the time. It leverages 21+ years of STM experience and insights from millions of research articles to provide in-depth academic writing, language editing, and submission readiness support to help you write better, faster.
Get accurate academic translations, rewriting support, grammar checks, vocabulary suggestions, and generative AI assistance that delivers human precision at machine speed. Try for free or upgrade to Paperpal Prime starting at US$19 a month to access premium features, including consistency, plagiarism, and 30+ submission readiness checks to help you succeed.
Experience the future of academic writing – Sign up to Paperpal and start writing for free!