
Nuffield Department of Primary Care Health Sciences, University of Oxford

Critical Appraisal tools

Critical appraisal worksheets to help you appraise the reliability, importance and applicability of clinical evidence.

Critical appraisal is the systematic evaluation of clinical research papers in order to establish:

  • Does this study address a clearly focused question?
  • Did the study use valid methods to address this question?
  • Are the valid results of this study important?
  • Are these valid, important results applicable to my patient or population?

If the answer to any of these questions is “no”, you can save yourself the trouble of reading the rest of the paper.
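The stop-early logic described above can be sketched as a short script. This is purely illustrative; the function name and data structure are assumptions of this sketch, not part of any published appraisal tool.

```python
# Illustrative sketch: appraise a paper question by question and stop
# at the first "no", per the screening advice above.
APPRAISAL_QUESTIONS = [
    "Does this study address a clearly focused question?",
    "Did the study use valid methods to address this question?",
    "Are the valid results of this study important?",
    "Are these valid, important results applicable to my patient or population?",
]

def worth_reading(answers):
    """answers: booleans, one per appraisal question, in order.

    Returns (keep_reading, failed_question): the paper is only worth
    reading further if every question so far is answered "yes".
    """
    for question, answer in zip(APPRAISAL_QUESTIONS, answers):
        if not answer:
            return False, question
    return True, None
```

For example, `worth_reading([True, False, True, True])` stops at the second question and reports it as the reason to set the paper aside.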

This section contains useful tools and downloads for the critical appraisal of different types of medical evidence. Example appraisal sheets are provided, together with several worked examples.

Critical Appraisal Worksheets

  • Systematic Reviews Critical Appraisal Sheet
  • Diagnostics Critical Appraisal Sheet
  • Prognosis Critical Appraisal Sheet
  • Randomised Controlled Trials (RCT) Critical Appraisal Sheet
  • Critical Appraisal of Qualitative Studies Sheet
  • IPD Review Sheet

Chinese - translated by Chung-Han Yang and Shih-Chieh Shao

  • Systematic Reviews Critical Appraisal Sheet
  • Diagnostic Study Critical Appraisal Sheet
  • Prognostic Critical Appraisal Sheet
  • RCT Critical Appraisal Sheet
  • IPD Reviews Critical Appraisal Sheet
  • Qualitative Studies Critical Appraisal Sheet

German - translated by Johannes Pohl and Martin Sadilek

  • Systematic Review Critical Appraisal Sheet
  • Diagnosis Critical Appraisal Sheet
  • Prognosis Critical Appraisal Sheet
  • Therapy / RCT Critical Appraisal Sheet

Lithuanian - translated by Tumas Beinortas

  • Systematic review appraisal Lithuanian (PDF)
  • Diagnostic accuracy appraisal Lithuanian (PDF)
  • Prognostic study appraisal Lithuanian (PDF)
  • RCT appraisal sheets Lithuanian (PDF)

Portuguese - translated by Enderson Miranda, Rachel Riera and Luis Eduardo Fontes

  • Portuguese – Systematic Review Study Appraisal Worksheet
  • Portuguese – Diagnostic Study Appraisal Worksheet
  • Portuguese – Prognostic Study Appraisal Worksheet
  • Portuguese – RCT Study Appraisal Worksheet
  • Portuguese – Systematic Review Evaluation of Individual Participant Data Worksheet
  • Portuguese – Qualitative Studies Evaluation Worksheet

Spanish - translated by Ana Cristina Castro

  • Systematic Review (PDF)
  • Diagnosis (PDF)
  • Prognosis Spanish Translation (PDF)
  • Therapy / RCT Spanish Translation (PDF)

Persian - translated by Ahmad Sofi Mahmudi

  • Prognosis (PDF)
  • PICO Critical Appraisal Sheet (PDF)
  • PICO Critical Appraisal Sheet (MS-Word)
  • Educational Prescription Critical Appraisal Sheet (PDF)

Explanations & Examples

  • Pre-test probability
  • SpPin and SnNout
  • Likelihood Ratios

CASP Checklists

Critical appraisal tools and resources

CASP has produced simple critical appraisal checklists for the key study designs. These are not meant to replace considered thought and judgement when reading a paper, but are intended as a guide and aide-mémoire. All CASP checklists cover three main areas: validity, results and clinical relevance.

What is Critical Appraisal?

Critical Appraisal is the process of carefully and systematically examining research to judge its trustworthiness, and its value and relevance in a particular context. It is an essential skill for evidence-based medicine because it allows people to find and use research evidence reliably and efficiently.


A complete list (published and unpublished) of articles and research papers about CASP and other critical appraisal tools and approaches, covering 1993–2012.

Critical Appraisal: Assessing the Quality of Studies

  • First Online: 05 August 2020


  • Edward Purssell (ORCID: orcid.org/0000-0003-3748-0864)
  • Niall McCrae (ORCID: orcid.org/0000-0001-9776-7694)

There is great variation in the type and quality of research evidence. Having completed your search and assembled your studies, the next step is to critically appraise the studies to ascertain their quality. Ultimately you will be making a judgement about the overall evidence, but that comes later. You will see throughout this chapter that we make a clear differentiation between the individual studies and what we call the body of evidence, which is all of the studies and anything else that we use to answer the question or to make a recommendation. This chapter deals with only the first of these—the individual studies. Critical appraisal, like everything else in systematic literature reviewing, is a scientific exercise that requires individual judgement, and we describe some tools to help you.



Author information

Authors and Affiliations

School of Health Sciences, City, University of London, London, UK

Edward Purssell

Florence Nightingale Faculty of Nursing, Midwifery & Palliative Care, King’s College London, London, UK

Niall McCrae

Corresponding author

Correspondence to Edward Purssell.

Copyright information

© 2020 The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG

About this chapter

Purssell, E., McCrae, N. (2020). Critical Appraisal: Assessing the Quality of Studies. In: How to Perform a Systematic Literature Review. Springer, Cham. https://doi.org/10.1007/978-3-030-49672-2_6

DOI: https://doi.org/10.1007/978-3-030-49672-2_6

Published: 05 August 2020

Publisher Name: Springer, Cham

Print ISBN: 978-3-030-49671-5

Online ISBN: 978-3-030-49672-2

eBook Packages: Medicine, Medicine (R0)

Open access | Published: 26 December 2022

CAT HPPR: a critical appraisal tool to assess the quality of systematic, rapid, and scoping reviews investigating interventions in health promotion and prevention

  • Thomas L. Heise,
  • Andreas Seidler,
  • Maria Girbig,
  • Alice Freiberg,
  • Adrienne Alayli,
  • Maria Fischer,
  • Wolfgang Haß &
  • Hajo Zeeb

BMC Medical Research Methodology, volume 22, Article number: 334 (2022)

For over three decades, researchers have developed critical appraisal tools (CATs) for assessing the scientific quality of research overviews. Most established CATs for reviews in evidence-based medicine and evidence-based public health (EBPH) focus on systematic reviews (SRs) that include studies of experimental interventions or exposures. EBPH- and implementation-oriented organisations and decision-makers, however, often seek access to rapid reviews (RRs) or scoping reviews (ScRs) for rapid evidence synthesis and research field exploration. Until now, no CAT has been available to assess the quality of SRs, RRs, and ScRs following a unified approach. We set out to develop such a CAT.

The development process of the Critical Appraisal Tool for Health Promotion and Prevention Reviews (CAT HPPR) included six phases: (i) the definition of important review formats and complementary approaches, (ii) the identification of relevant CATs, (iii) prioritisation, selection and adaptation of quality criteria using a consensus approach, (iv) development of the rating system and bilingual guidance documents, (v) engaging with experts in the field for piloting/optimising the CAT, and (vi) approval of the final CAT. We used a pragmatic search approach to identify reporting guidelines/standards (n = 3; e.g. PRISMA, MECIR) as well as guidance documents (n = 17; e.g. for reviews with a mixed-methods approach) to develop working definitions for SRs, RRs, ScRs, and other review types (esp. those defined by statistical methods or included data sources).

We successfully identified 14 relevant CATs, predominantly for SRs (e.g. AMSTAR 2), and extracted 46 items. Following consensual discussions, 15 individual criteria were included in our CAT and tailored to the review types of interest. The CAT was piloted with 14 different reviews eligible for inclusion in a new German database of interventions in health promotion and prevention across different implementation settings.

Conclusions

The newly developed CAT HPPR follows a unified approach to assessing a set of heterogeneous reviews (e.g. reviews ranging from problem identification to policy evaluation) to meet end-users' needs. Feedback from external experts showed general feasibility and satisfaction with the tool. Future studies should formally test the validity of the CAT HPPR using larger sets of reviews.

Reviews in health promotion and prevention research are primarily used to summarise, analyse, and assess various evidence sources for answering a particular research question and help to overcome the know-do gap in different implementation settings [1, 2, 3]. In particular, when there is a rapid increase in both primary research and other sources of information within a research field, or when conflicting review results make conclusions less definitive, readers need assurance that the methods used in a review are designed to minimise bias [4]. Well-established review guidelines and standardised procedures can be used to increase both the quality and content of reviews [1, 2, 5, 6, 7, 8, 9]. Their uptake among researchers was profoundly influenced by the work of coordinated reporting guideline initiatives such as the EQUATOR network [10]. The PRISMA guideline, updated in 2020, is one of the most cited guidelines aiming to set minimal reporting standards. It provides guidance on how to transparently report review sections such as the abstract and the search and synthesis methods, to increase reproducibility, replicability and confidence in a review [8].

In addition to this development, a separate line of research became established, first mentioned in the medical context more than three decades ago: the development and application of Critical Appraisal Tools (CATs), devoted to exploring ways of assessing the quality of reviews [11, 12, 13]. Beyond drawing conclusions on the overall reporting quality of a review, the aim of using a CAT is to transparently and objectively assess the selection and application of adequate review methods and to identify major methodological shortcomings or bias, including the appropriateness of review conclusions. Covering methodological aspects of different review steps, this process can lead to an overall rating of the general quality of a review report [14, 15]. Various CATs have been developed over time for different fields of application and target audiences. They differ mainly in the degree of manualisation (i.e. guidance documents with further explanations to reach objective ratings), the type of questions (e.g. open or closed), the type of answers (including the number of answer options), the use of overall scores, and the effort and time required for completion [11, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39]. The areas of application of CATs can be further classified into scientific use cases [15, 38], education purposes [20, 26] or guideline development [7, 30, 33]. We developed a new CAT for Health Promotion and Prevention Reviews (CAT HPPR) primarily to provide assessments for the different review types found in a review database for health promotion and prevention. In the past, a similar approach has been taken by researchers of healthevidence.org, a curated review database for systematic reviews [14].

The “GKV-Bündnis für Gesundheit”, a joint initiative of all statutory health insurance funds for developing and implementing setting-based health promotion and prevention measures, commissioned a series of reviews on strategies and interventions for health promotion and disease prevention. Reviews focused on settings such as early child care, schools and the local community. Final reports were later compiled in a review database (“Knowledge for Healthy Settings”) and are available to the public (https://www.gkv-buendnis.de/forschung-im-buendnis/datenbank-wissen-fuer-gesunde-lebenswelten/). The commissioned reviews covered a broad range of topics and focused on different evidence sources (e.g. studies, case or best-practice reports) to be included. Led by decisions on content or scope specified by the “GKV-Bündnis für Gesundheit” to support health policy decision-making, the types of review and methods selected by the author teams were not all the same [40]. This mix of applied methods can also be seen in reviews published by international journals in the field of health promotion and prevention [41]. Existing CATs are predominantly designed to assess systematic reviews, and were not well suited to reflect on and evaluate unique key aspects of some emerging review types (e.g. scoping reviews) and complementary approaches (e.g. mixed methods: integration of quantitative and qualitative data) which were part of this review collection [14, 15, 40]. More specifically, during our background search for published CATs, we could not identify any CAT exclusively designed for assessing scoping reviews or rapid reviews. As a result, no standardised approach to critically appraising different review types existed at that time, leaving end-users of reviews (i.e. users of curated review databases, guideline developers etc.) with imperfect solutions whenever a critical assessment of different review types was required. Either CATs for systematic reviews had to be used across different review types, where many criteria remained not assessable/applicable (e.g. appraisal of synthesis methods for a review of reviews using AMSTAR), or a critical assessment of emerging review approaches was not undertaken, which left critical review evidence unassessed.

The lack of a well-documented CAT for the simultaneous assessment of various review types motivated the development of this new tool. The goals of the project were: (i) to develop working definitions of review types in order to set the scope for the CAT HPPR, (ii) to develop an appraisal tool based on key criteria of existing CATs (e.g. healthevidence.org, AMSTAR 2), including a manual for end-users, and (iii) to pilot the tool with a set of available reviews funded by the “GKV-Bündnis für Gesundheit”.

The tool development process of the CAT HPPR was pre-determined by a research protocol written in German (see Availability of data and materials). The reporting on the development process of the CAT HPPR is informed by recommendations defined by Whiting et al. for developing quality assessment tools (“Stage 2: tool development”) [42]. Documentation of the search and selection process to identify and select CATs, reporting guidelines and items is appended to this article.

Search for retrieving reporting guidelines and standards to define review types

For developing the CAT HPPR, we first conducted a pragmatic search in July 2019 of relevant websites (including the EQUATOR Network, Cochrane, JBI), an electronic database (Medline) and references provided by members of the larger project team in order to identify relevant reporting guidelines/standards (n = 3) [1, 2, 9]. We then collated further guidance documents for conducting a review within the scope of the tool (systematic reviews, rapid reviews, scoping reviews) and optional complementary review approaches (review of reviews (also known as overview of reviews), mixed-methods approaches, meta-analysis) (n = 17) [5, 6, 7, 41, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55].

Consensus approach to define review types

Based on the identified documents, narrow working definitions for review types were developed and further refined, involving all project partners (tool developers and members of a project-specific reviewer pool piloting the novel CAT). Tool developers comprised all authors of this article (n = 8), whereas members of the reviewer pool (n = 3) were experienced review authors with methodological knowledge beyond systematic and Cochrane reviews, recruited and commissioned by the “GKV-Bündnis für Gesundheit” (see Acknowledgements). Given the lack of consensus regarding the different types of reviews and their complementary approaches in the scientific literature, this step was crucial for achieving better applicability of the to-be-developed appraisal tool, its criteria and the global rating algorithm. Methodologically less narrowly defined types of reviews (e.g. overviews), or those with a very large overlap with the types and approaches we had already defined (e.g. mapping reviews, umbrella reviews), were not considered separately [41].

Search for retrieving CATs to inform items

As a further step towards identifying and tailoring relevant content of pre-existing CATs and their criteria for our tool, we carried out an electronic search using the same approach (i.e. Medline, websites, literature provided by project partners) as for reporting standards. Inclusion criteria for CATs were defined as follows: CAT originally developed for review articles, question/item-based CAT, CAT applicable to general medical or health topics, and CAT with corresponding guidance documents readily available.

Compiling initial list of items for inclusion

We assessed 30 full texts of CATs and other review evaluation instruments for eligibility. Excluded CATs were not exclusively developed to assess reviews (n = 1 [31]), were mainly developed for training of practitioners (n = 3 [18, 19, 23]), were developed for a specific medical field (n = 1 [25]), had no or limited guidance available (n = 3 [11, 21, 32]), were developed to assess the relevance of review findings (n = 2 [16, 26]), or were not considered for data extraction because the main report suggested strong overlap with another established CAT (n = 2 [22, 27]). As a result, 14 CATs [13, 14, 15, 20, 24, 28, 29, 30, 33, 34, 35, 36, 37, 38] based on 18 reports were finally considered eligible for item identification. Included CATs were mainly developed for the quality assessment of systematic reviews. Since the CAT of healthevidence.org shared the most similar aim and content with our to-be-developed CAT [14], individual criteria of this tool were extracted first and compared to criteria extracted from the remaining 13 CATs. Extracted data were checked by a second tool developer. Criteria with the same wording or content across different CATs were removed.

Initial items and scope

A review process of all criteria, including discussion among and consensus decisions by the tool developers, reduced the number of identified individual criteria from 46 to 15. The following exclusion criteria informed this process: strong overlap with items of healthevidence.org (n = 11; i.e. similar wording), limited relevance for the quality of review findings (n = 16; e.g. “Were directions for future research proposed?” [36]), and limited potential for replicable assessments (n = 4; e.g. “Date of review – is it likely to be out of date?” [35]). The overall aim was to identify items that were comprehensive, relevant and objectively appraisable. Given some overlap between individual criteria in the set of extracted criteria, a factor analysis, as performed by the developers of the original AMSTAR tool [34], was not undertaken. Instead, we extended some criteria with objectively appraisable content during further internal revisions of the tool (see Manual; coding boxes).

First draft of CAT HPPR and guidance development

We also used reporting guidelines/standards as well as guidance documents for reviews to set basic requirements for each criterion to be fulfilled by a review, and developed further guidance for reaching a judgement by a user. A global rating system combining the information gained from all 15 criteria was introduced.

Piloting and refinement

Finally, after piloting a first version of the CAT HPPR with 14 reviews, feedback and requests for further clarification from intended users of the tool's assessments and from experts of the project-specific reviewer pool led to final adjustments of the tool [40]. Feedback and requests were based on completed assessments covering all major review types the CAT HPPR was originally designed for (SR: n = 2, RR: n = 2, ScR: n = 10). As a result, a review-type-specific algorithm was introduced into the global rating system in order to better take the methodological advantages and disadvantages of individual review types into account. Among other things, the “Risk of Bias Assessment” was thus highlighted as a basic requirement and quality feature in systematic reviews compared to other review types. Informal feedback from CAT HPPR users was requested at the end of the piloting stage regarding processing time (not actually timed) and overall satisfaction with the scope and applicability of the CAT HPPR and its guidance documents.

Table 1 provides all 15 questions (criteria) used in the novel CAT HPPR, whereas minimal requirements to obtain a positive rating are further defined in the manual and assessment form appended to this article.

The new CAT HPPR was primarily influenced by the healthevidence.org CAT (i.e. items, plain-language style) [14] and AMSTAR 2 (i.e. global rating process based on critical criteria) [15], which were originally designed to be used exclusively with systematic reviews. A major challenge remained to adapt the basic concept behind seven of the original healthevidence.org CAT items [14], six unique items of AMSTAR 2 [15], one item of AQASR [36] and one item of SURE [35] to be applicable also to other review types, namely rapid and scoping reviews, and to complementary review approaches (review of reviews, mixed-methods approaches, meta-analysis). Definitions for review types and complementary review approaches applicable to the CAT HPPR can be found in appendix 1 of the tool's manual. We used working definitions based on reporting guidelines and review-specific methodological research to further narrow down the minimal requirements for reaching a judgement.

A full rationale for the inclusion of each of the 15 criteria is provided in the manual. To briefly summarise, C1 aims to assess whether review authors were able to provide an adequately formulated review question [8]. In addition to PICO(-TSSD), scoping reviews can also be assessed using the PCC (population, concept and context) question format [5]. The gold standard of whether the review was based on a protocol detailing in advance the review's rationale, objective and methods is the subject of C2 [8, 56]. A review should have clearly reported in its methods section the eligibility criteria by which the included evidence sources were selected and non-relevant sources were excluded, to make results plausible and reproducible (C3) [8, 9]. A well-documented and comprehensive search strategy includes search approaches for multiple literature databases and other search streams to identify relevant evidence sources; this can differ in particular between systematic and rapid reviews and is assessed with C4 [57]. Full reporting on the selection process is the subject of C5 [8]. The description of the characteristics of included evidence sources can be part of the results (e.g. in scoping reviews) as well as an intermediate step informing the synthesis of a review [5, 8]. C6 asks whether these characteristics are sufficiently reported in a review. Depending on the included evidence sources and the research question of interest, different approaches are available for synthesising the results (or data) in a review. Review authors should at least report a well-balanced narrative synthesis considering all included evidence sources (presented in tables and/or text) for a positive rating of C7 [5]. C8 investigates whether the interpretation by the review authors was in line with the data of the included evidence sources. Involvement of at least two people in relevant review tasks (e.g. the selection process) can help to avoid errors and biased decisions and thereby contribute to the overall decision quality, which is assessed at C9 [6]. Assessing potential bias of the included evidence by conducting a quality assessment can be considered a review result in its own right and helps to further understand the certainty of the evidence [6]; minimal requirements for a quality assessment are outlined at C10. Strengths and weaknesses of the synthesised evidence can be based on this quality assessment and should be identifiable in the interpretation of the review findings (C11). Whether statistical tests and/or narrative reporting on the homogeneity or heterogeneity of included evidence sources were conducted (which can guide the selection of a synthesis method and clarify whether in- and exclusion criteria were thoroughly followed) is the subject of C12 [15]. A critical reflection on the limitations and decisions made in the review process is necessary to assess the uncertainty of the overall review results and to identify aspects that future research should investigate. Limitations may stem from external factors (e.g. funding of the review project) and the context of the research, but should nevertheless be transparently reported (C13) [8]. Conflicts of interest affecting a review team (authors) can sometimes, though not automatically, lead to biased decisions in the conduct of a review, which can ultimately translate into biased review results and should be checked at C14 [15]. Finally, review authors should remain as objective as possible while taking both positive and negative aspects as well as unintended or adverse effects of interventions or exposures into account [46]. Unbalanced reporting, or no reporting, of all relevant outcomes will lead to a negative rating of C15.

Using the CAT HPPR

Each assessment starts with the assignment or definition of the review type (and, if applicable, the complementary review approach based on this type), which forms the basis for the entire appraisal process (see Manual). This is a basic requirement so that each criterion and question can be rated or answered in accordance with the methodological requirements for a particular review type and, if applicable, approach (Fig. 1).

figure 1

Critical appraisal process using CAT HPPR

The CAT HPPR dictionary in the manual provides guidance on how to reach a judgement for each criterion. The information in the text box in particular serves as a guideline for reaching the final judgement. In addition to the obligatory consideration of “hard” criteria (i.e. information for reaching a judgement of YES or NO), further information provided in the introduction to each criterion serves as general orientation as to which factors may also influence the rating [14] (Fig. 2).

figure 2

Example of a “Coding box” for C1

Differences in ratings for systematic reviews, scoping reviews and rapid reviews

Minimum requirements for individual criteria in CAT HPPR differ by design across review types, and so does the global rating (Fig. 3).

figure 3

Example of the assessment form for C1

Rapid reviews usually feature a reduced scope of the research question, a smaller number of databases searched, the use of search limiters (e.g. time period covered, language restrictions), the omission of the four-eyes principle (i.e. two authors independently involved in screening, extracting or assessing data) for most review steps, and a narrative presentation of the results without a meta-analysis [41, 45, 47, 54, 58]. Scoping reviews tend not to provide standardised effect estimates or a quality assessment of individual evidence sources, and usually omit further quantitative analyses such as sensitivity and subgroup analyses [9, 49]. The global rating of CAT HPPR for a review therefore depends on the assigned review type (and, if applicable, the complementary review approach), the research question (C1) and the kind of data extracted from the included evidence sources and used for the review (see C10). End-users first need to select the algorithm that applies to the appraised review and then follow the instructions for each individual criterion regarding its eligibility in the global rating process. The biggest difference between the review types in the global rating process is whether C9 and C10 are treated as “critical criteria” (see Table 1). In contrast to systematic reviews, some methodological aspects (four-eyes principle, quality assessment of included evidence sources) are often not defined as minimum requirements by (reporting) guidelines for rapid or scoping reviews (especially RRs), or are simply not taken into consideration by review authors in practice (especially ScRs) [9, 41, 45, 47, 49, 54, 55, 58].
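The idea of a review-type-dependent global rating can be illustrated with a minimal sketch. This is a hypothetical illustration, not the actual CAT HPPR algorithm: the critical-criterion sets and downgrade thresholds below are assumptions chosen only to show how the same ratings can yield different global ratings for different review types.

```python
# Hypothetical sketch of a global rating with review-type-dependent
# "critical criteria", loosely modelled on the AMSTAR 2 idea of downgrading
# for critical flaws. Criterion sets and cut-offs are illustrative only and
# do NOT reproduce the algorithms in the CAT HPPR manual.

CRITICAL_CRITERIA = {
    "systematic review": {"C2", "C4", "C9", "C10"},
    "rapid review": {"C2", "C4"},          # C9/C10 often relaxed for RRs
    "scoping review": {"C2", "C4", "C9"},  # quality assessment (C10) optional
}

def global_rating(review_type: str, ratings: dict) -> str:
    """ratings maps criterion IDs (C1..C15) to True (met) or False (not met)."""
    critical = CRITICAL_CRITERIA[review_type]
    critical_flaws = sum(1 for c in critical if not ratings.get(c, False))
    other_flaws = sum(1 for c, ok in ratings.items() if not ok and c not in critical)
    if critical_flaws >= 2:
        return "very low"
    if critical_flaws == 1:
        return "low"
    return "moderate" if other_flaws > 1 else "high"

# A rapid review missing a quality assessment (C10) is not downgraded for a
# critical flaw, whereas a systematic review with the same ratings would be:
ratings = {f"C{i}": True for i in range(1, 16)}
ratings["C10"] = False
print(global_rating("rapid review", ratings))       # "high"
print(global_rating("systematic review", ratings))  # "low"
```

The point of the sketch is the lookup step: the assessor first fixes the review type, and only then do the per-criterion ratings acquire their weight in the global rating.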

Experiences shared by CAT HPPR users

Based on user experience with CAT HPPR during the pilot phase, the processing time for an appraisal was described as “manageable” and comparable to other CATs for systematic reviews [15]. Once familiar with the tool, users needed approximately thirty minutes for a complete assessment of a 40-page review manuscript. End-users reported that reading the dictionary and all definitions of the manual for the first time took longer than a single review assessment, which should be taken into consideration when introducing the tool to new users. All 14 reviews in the pilot phase were successfully assessed with CAT HPPR, with final global ratings ranging from “very low” to “high” (“moderate” being the least frequently assigned global rating). This showcased the tool’s ability to differentiate between reviews of different quality: it ranked reviews with high reporting quality and appropriate use of standard review methods better than other reviews, which was later confirmed by oral feedback from end-users of the reports who were not directly involved in the CAT development process.

The development process of CAT HPPR led to a manual and appraisal form which can now be used by academic researchers, students and practitioners in health promotion and prevention to assess a variety of review types and analytic designs. Existing CATs, especially AMSTAR 2 and ROBIS, are well suited for assessing systematic reviews of intervention studies with meta-analysis and remain relevant in academia to this day [15, 38]; CAT HPPR was not intended to replace them. In the case of the healthevidence.org CAT, the tool has proven that ratings which are later added as supplementary information to a publicly available review database increase confidence in the evidence provided [14]. CAT HPPR fills the gap where reviews of different types need to be assessed at the same time, and introduces a strategy to decompose complexity by providing different algorithms for achieving an informative, meaningful and yet nuanced global rating. A key strength of this work is that review types such as rapid and scoping reviews, as well as mixed-methods reviews and reviews of reviews, so far overlooked by CAT developers, can now be assessed using a transparent method [13, 14, 15, 20, 24, 28, 29, 30, 33, 34, 35, 36, 37, 38]. Prior to the development of CAT HPPR, quality assessments of these review types had to rely on CATs originally designed for systematic reviews, where many items do not capture the full breadth of other review types (e.g. for reviews of reviews: reviews rather than primary studies as the documents to be included; for mixed-methods approaches: the integration of quantitative and qualitative data). We tried to overcome this challenge by using new reporting standards (e.g. PRISMA-ScR, guidelines by JBI) and our own working definitions for reviews to inform CAT HPPR [1, 2, 9].
Working in close partnership with experts in the field of meta-research led to adjustments that make CAT HPPR more accessible for persons without prior in-depth knowledge of reviews. The development process of CAT HPPR was not without limitations, given the short development time and the opportunistic approach we took. In particular, we had hoped to test inter-rater reliability using κ scores for agreement between pairs of raters assessing reviews of different formats; this should be tested in future research [15]. The tool was also piloted with a limited set of commissioned reviews, and the identification strategy for reporting guidelines, CATs and items could have been improved by conducting a scoping review with a broader search at the outset. And despite using recent reporting guidelines and research to inform our working definitions for review types and complementary review approaches, definitions of review types remain subject to ongoing scientific discussion and may change over time. We tried to partly address this issue by letting the assessor decide, based on the guidance provided in the manual, on the assignment to a review type and complementary review approach; this assignment does not necessarily have to correspond to the label used by the review authors to describe their own work (e.g. authors’ label: “overview of reviews”; labels used for the CAT HPPR assessment: “systematic review” “as review of reviews”). Lastly, when interpreting the global rating of a rapid review in particular, another factor regarding the certainty of the evidence should be cautiously factored in: using abridged review methods can by default result in the exclusion and/or non-consideration of relevant evidence in comparison to a systematic review. This similarly applies to scoping reviews, which are sometimes used exclusively to generate working definitions and to investigate the boundaries of a research topic [5, 9]. For this reason, too, comparisons between different review types based on the global rating should be avoided (see Manual).

The CAT HPPR can inform research and evidence-based public health practice that aims to assess the quality of different review formats and analytic approaches. The tool pursues the dual aim of integrating the existing knowledge of established CATs for systematic reviews (i.e. AMSTAR 2, healthevidence.org [14, 15]) with the reporting standards and author guidance for emerging review formats (scoping reviews, rapid reviews) that have recently become available [2, 5, 7, 8, 9, 41, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55]. To our knowledge, no similar CAT has been published to date that takes this unified approach, allowing the assessment of reviews which include “evidence sources” other than traditional experimental clinical trials (e.g. RCTs). The accompanying manual provides in-depth guidance on how to make objective assessments leading to a global rating across different review types. The criteria and definitions reported in the CAT HPPR manual might also help authors to improve their own work by making them aware of the differences and standards for different review types and complementary approaches. We acknowledge one minor limitation: piloting was conducted with an earlier version of the tool, so inter-rater agreement should be investigated in future research. Nevertheless, we anticipate that CAT HPPR, including the review definitions developed for it, will inform ongoing practice for EBPH- and implementation-oriented individuals and organisations. In particular, we stress the importance of using review evidence with high levels of transparency as well as methodologically sound reporting and content. We call on researchers and practitioners alike to work with CAT HPPR and welcome feedback for its further development.

Availability of data and materials

All documents for using CAT HPPR are appended as online supplementary materials or included directly in this article as figures, tables or text. Additional data and documents used in the development stages of the tool, including the research protocol, are available upon reasonable request; please contact the corresponding author (Thomas L. Heise).

Abbreviations

AMSTAR: A MeaSurement Tool to Assess systematic Reviews

AMSTAR 2: Revision of AMSTAR

AQASR: Assessing the Quality and Applicability of Systematic Reviews

CAT: Critical Appraisal Tool

CAT HPPR: Critical Appraisal Tool for Health Promotion and Prevention Reviews

EQUATOR: Enhancing the QUAlity and Transparency Of health Research

GKV: Gesetzliche Krankenversicherung (German Statutory Health Insurance Funds)

JBI: Joanna Briggs Institute

MEDLINE: Medical Literature Analysis and Retrieval System Online (U.S. National Library of Medicine)

MECIR: Methodological Expectations of Cochrane Intervention Reviews

PCC: Population, Concept and Context

PICO-TSSD: Population/patient/problem; Intervention, strategy or phenomenon of interest; Comparator; Outcomes, results of interest; Timing of outcome/follow-up measurement/assessment; Setting; Study Design

PRISMA: Preferred Reporting Items for Systematic Reviews and Meta-Analyses

PRISMA-ScR: PRISMA Extension for Scoping Reviews

RCT: Randomized controlled trial

RoB: Risk of Bias

RR: Rapid Review

ScR: Scoping Review

SR: Systematic Review

SURE: Specialist Unit for Review Evidence

Moher D, Liberati A, Tetzlaff J, Altman DG. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. PLoS Med. 2009;6(7):e1000097.


Shamseer L, Moher D, Clarke M, Ghersi D, Liberati A, Petticrew M, et al. Preferred reporting items for systematic review and meta-analysis protocols (PRISMA-P) 2015: elaboration and explanation. BMJ. 2015;349:g7647.

Theobald S, Brandes N, Gyapong M, El-Saharty S, Proctor E, Diaz T, et al. Implementation research: new imperatives and opportunities in global health. Lancet. 2018;392(10160):2214–28.

Rychetnik L, Wise M. Advocating evidence-based health promotion: reflections and a way forward. Health Promot Int. 2004;19(2):247–57.

Joanna Briggs Institute Reviewer’s Manual https://reviewersmanual.joannabriggs.org/ Accessed 24 July 2019.

Methodological Expectations of Cochrane Intervention Reviews https://community.cochrane.org/mecir-manual/ Accessed 24 July 2019.

Developing NICE Guidelines: The Manual https://www.nice.org.uk/process/pmg20/chapter/introduction-and-overview Accessed 24 July 2019.

Page MJ, McKenzie JE, Bossuyt PM, Boutron I, Hoffmann TC, Mulrow CD, et al. The PRISMA 2020 statement: an updated guideline for reporting systematic reviews. BMJ. 2021;372:n71.

Tricco AC, Lillie E, Zarin W, O'Brien KK, Colquhoun H, Levac D, et al. PRISMA extension for scoping reviews (PRISMA-ScR): Checklist and explanation. Ann Intern Med. 2018;169(7):467–73.

Enhancing the QUAlity and Transparency Of health Research (EQUATOR-Network) https://www.equator-network.org/ Accessed 24 July 2019.

Mulrow CD. The medical review article: state of the science. Ann Intern Med. 1987;106(3):485–8.


Oxman AD, Cook DJ, Guyatt GH, Bass E, Brill-Edwards P, Browman G, et al. Users’ guides to the medical literature: VI. How to Use an Overview. JAMA. 1994;272(17):1367–71.

Oxman AD, Guyatt GH. Validation of an index of the quality of review articles. J Clin Epidemiol. 1991;44(11):1271–8.

Healthevidence.org : quality assessment tool - Review Articles https://www.healthevidence.org/documents/our-appraisal-tools/QA_Tool&Dictionary_10Nov16.pdf Accessed 24 July 2019.

Shea BJ, Reeves BC, Wells G, Thuku M, Hamel C, Moran J, et al. AMSTAR 2: a critical appraisal tool for systematic reviews that include randomised or non-randomised studies of healthcare interventions, or both. BMJ. 2017;358:j4008.

Evidence analysis manual - appendix 10: quality criteria Checklist: Review Article https://www.andeal.org/vault/2440/web/files/2016_April_EA_Manual.pdf Accessed 24 July 2019.

Reviews and Meta-Analyses Checklist https://bestbets.org/ca/pdf/review.pdf Accessed 24 July 2019.

Critical Appraisal of a Meta-analysis or Systematic Review https://www.cebma.org/wp-content/uploads/Critical-Appraisal-Questions-for-a-SR-or-MA-july-2014.pdf Accessed 24 July 2019.

Critical appraisal worksheets: Systematic Reviews https://www.cebm.net/systematic-review-4/ Accessed 24 July 2019.

CASP Checklist: 10 questions to help you make sense of a Systematic Review https://casp-uk.net/wp-content/uploads/2018/01/CASP-Systematic-Review-Checklist_2018.pdf Accessed 24 July 2019.

Critical Appraisal Checklist for a Systematic Review https://www.gla.ac.uk/media/media_64047_en.pdf Accessed 24 July 2019.

Diekemper RL, Ireland BK, Merz LR. Development of the documentation and appraisal review tool for systematic reviews. World J Meta-Anal. 2015;3(3):142–50.

Assessing the Credibility of the Systematic Review Process https://guides.mclibrary.duke.edu/ld.php?content_id=27688601 Accessed 24 July 2019.

Tools for critically appraising different study designs, systematic review and literature searches https://doi.org/10.2903/sp.efsa.2015.EN-836 Accessed 24 July 2019.

Glenny AM, Esposito M, Coulthard P, Worthington HV. The assessment of systematic reviews in dentistry. Eur J Oral Sci. 2003;111(2):85–92.

Systematic review (of therapy) worksheet https://ebm-tools.knowledgetranslation.net/themes/blue/files/uploads/sr-worksheet.doc Accessed 24 July 2019.

Kung J, Chiappelli F, Cajulis OO, Avezova R, Kossan G, Chew L, et al. From systematic reviews to clinical recommendations for evidence-based health care: validation of revised assessment of multiple systematic reviews (R-AMSTAR) for grading of clinical relevance. Open Dent J. 2010;4:84–91.


Quality Assessment of Systematic Reviews and Meta-Analyses https://www.nhlbi.nih.gov/health-topics/study-quality-assessment-tools Accessed 24 July 2019.

The social care guidance manual - Appendix B Methodology checklist: systematic reviews and meta-analyses https://www.nice.org.uk/process/pmg10/chapter/appendix-b-methodology-checklist-systematic-reviews-and-meta-analyses Accessed 24 July 2019.

Infection Prevention and Control Guidelines: Critical Appraisal Tool Kit http://publications.gc.ca/collections/collection_2014/aspc-phac/HP40-119-2014-eng.pdf Accessed 24 July 2019.

Meta-tool for quality appraisal of public health evidence - PHO MetaQAT 1.0 https://www.publichealthontario.ca/en/ServicesAndTools/CriticalAppraisalTool/PHO_MetaQAT_2015.pdf Accessed 24 July 2019.

Sacks HS, Berrier J, Reitman D, Ancona-Berk VA, Chalmers TC. Meta-analyses of randomized controlled trials. N Engl J Med. 1987;316(8):450–5.

Critical appraisal notes and checklists - Methodology Checklist 1: Systematic Reviews and Meta-analyses https://www.sign.ac.uk/checklists-and-notes.html Accessed 24 July 2019.

Shea BJ, Grimshaw JM, Wells GA, Boers M, Andersson N, Hamel C, et al. Development of AMSTAR: a measurement tool to assess the methodological quality of systematic reviews. BMC Med Res Methodol. 2007;7:10.

Questions to assist with the critical appraisal of a systematic review https://www.cardiff.ac.uk/__data/assets/pdf_file/0007/1142962/SURE-CA-form-for-SR_2018.pdf Accessed 24 July 2019.

Assessing the quality and applicability of systematic reviews (AQASR) https://ktdrr.org/ktlibrary/articles_pubs/ncddrwork/aqasr/ Accessed 24 July 2019.

Critical appraisal tools for use in JBI systematic reviews: Checklist for Systematic Reviews and Research Syntheses http://joannabriggs.org/research/critical-appraisal-tools.html Accessed 24 July 2019.

Whiting P, Savović J, Higgins JPT, Caldwell DM, Reeves BC, Shea B, et al. Group R: ROBIS: a new tool to assess risk of bias in systematic reviews was developed. J Clin Epidemiol. 2016;69:225–34.

Auswahl- und Bewertungskriterien für die CTC Programm-Datenbank https://www.gruene-liste-praevention.de/communities-that-care/Media/_Grne_Liste_Kriterien.pdf Accessed 24 July 2019.

Alayli AFG, Witte C, Haß W, Zeeb H, Heise TL, Hupfeld J. [Insights for healthy settings: a database to support the translation of findings from systematic reviews into practice] Wissen für gesunde Lebenswelten: Eine Datenbank zum Praxistransfer von Erkenntnissen aus systematischen Übersichtsarbeiten. Bundesgesundheitsbl Gesundheitsforsch Gesundheitsschutz. 2021;64(5):552–9.

Grant MJ, Booth A. A typology of reviews: an analysis of 14 review types and associated methodologies. Health Inf Libr J. 2009;26(2):91–108.

Whiting P, Wolff R, Mallett S, Simera I, Savović J. A proposed framework for developing quality assessment tools. Syst Rev. 2017;6(1):204.

Arksey H, O'Malley L. Scoping studies: towards a methodological framework. Int J Soc Res Methodol. 2005;8(1):19–32.

Harker J, Kleijnen J. What is a rapid review? A methodological exploration of rapid reviews in health technology assessments. Int J Evid Based Healthc. 2012;10(4):397–410.

EPC Methods: An Exploration of Methods and Context for the Production of Rapid Reviews https://www.ncbi.nlm.nih.gov/books/NBK274092/pdf/Bookshelf_NBK274092.pdf Accessed 24 July 2019.

Cochrane Handbook for Systematic Reviews of Interventions Version 5.1.0 [updated March 2011] https://training.cochrane.org/handbook Accessed 24 July 2019.

Khangura S, Polisena J, Clifford TJ, Farrah K, Kamel C. Rapid review: an emerging approach to evidence synthesis in health technology assessment. Int J Technol Assess Health Care. 2014;30(1):20–7.

Levac D, Colquhoun H, O'Brien KK. Scoping studies: advancing the methodology. Implement Sci. 2010;5:69.

Munn Z, Peters MDJ, Stern C, Tufanaru C, McArthur A, Aromataris E. Systematic review or scoping review? Guidance for authors when choosing between a systematic or scoping review approach. BMC Med Res Methodol. 2018;18(1):143.

Pearson A, White H, Bath-Hextall F, Salmond S, Apostolo J, Kirkpatrick P. A mixed-methods approach to systematic reviews. Int J Evid Based Healthc. 2015;13(3):121–31.

Peters MDJ, Godfrey CM, Khalil H, McInerney P, Parker D, Soares CB. Guidance for conducting systematic scoping reviews. Int J Evid Based Healthc. 2015;13(3):141–6.

Pham MT, Rajic A, Greig JD, Sargeant JM, Papadopoulos A, McEwen SA. A scoping review of scoping reviews: advancing the approach and enhancing the consistency. Res Synth Methods. 2014;5(4):371–85.

Smith V, Devane D, Begley CM, Clarke M. Methodology in conducting a systematic review of systematic reviews of healthcare interventions. BMC Med Res Methodol. 2011;11(1):15.

Rapid reviews to strengthen health policy and systems: a practical guide https://apps.who.int/iris/bitstream/handle/10665/258698/9789241512763-eng.pdf Accessed 29 July 2019.

Tricco AC, Lillie E, Zarin W, O'Brien K, Colquhoun H, Kastner M, et al. A scoping review on the conduct and reporting of scoping reviews. BMC Med Res Methodol. 2016;16:15.

Pieper D, Rombey T. Where to prospectively register a systematic review. Syst Rev. 2022;11(1):8.

Nussbaumer-Streit B, Klerings I, Wagner G, Heise TL, Dobrescu AI, Armijo-Olivo S, et al. Abbreviated literature searches were viable alternatives to comprehensive searches: a meta-epidemiological study. J Clin Epidemiol. 2018;102:1–11.

Tricco AC, Antony J, Zarin W, Strifler L, Ghassemi M, Ivory J, et al. A scoping review of rapid review methods. BMC Med. 2015;13:224.


Acknowledgements

We would like to show our gratitude to members of the project-specific reviewer pool (Prof. Dr. Stefan K. Lhachimi, Prof. Dr. Dawid Pieper and Dr. Manuela Pfinder) for sharing feedback on an earlier version of the tool with us during the course of a workshop organised by the BZgA. Valuable feedback was also provided by Christine Witte and Dr. Aaron Kreimer. The authors would like to thank Kirsty Cameron for constructive criticism of the CAT HPPR manual. Finally, we would like to thank two anonymous peer-reviewers and the editor for taking the time to thoughtfully review our manuscript. We appreciated all valuable comments and suggestions, which led to an improved version of this manuscript.

Open Access funding enabled and organized by Projekt DEAL. The development of CAT HPPR was funded by the “GKV-Bündnis für Gesundheit”, a joint initiative of all statutory health insurance funds for developing and implementing setting-based health promotion and prevention measures. The content is solely the responsibility of the authors and does not necessarily represent the official views of the funding organisations.

Author information

Authors and Affiliations

Leibniz Institute for Prevention Research and Epidemiology—BIPS, Bremen, Germany

Thomas L. Heise & Hajo Zeeb

Health Sciences Bremen, University of Bremen, Bremen, Germany

Institute and Policlinic of Occupational and Social Medicine, Faculty of Medicine, Technische Universität Dresden, Dresden, Germany

Andreas Seidler, Maria Girbig & Alice Freiberg

Unit of Health Services Research, Clinic of General Pediatrics, Neonatology and Pediatric Cardiology, University Hospital Düsseldorf, Medical Faculty, Heinrich-Heine-University Düsseldorf, Düsseldorf, Germany

Adrienne Alayli

Federal Centre for Health Education—BZgA, Cologne, Germany

Maria Fischer & Wolfgang Haß


Contributions

WH, HZ, and TLH set the scope for developing CAT HPPR. TLH, HZ identified reporting guidelines, relevant CATs, extracted data, developed assessment criteria, the global rating process and drafted a first version of CAT HPPR. AS, MG, AF, AA, MF, WH provided feedback to each section of the tool at various development stages. The tool was piloted by AF, MG, TLH, HZ using reviews of the “GKV-Bündnis für Gesundheit” database. TLH led manuscript development and contributed substantially to each manuscript section. All authors (TLH, HZ, AS, MG, AF, AA, MF, WH) contributed to manuscript editing, reviewed several iterations before the final manuscript version was approved and submitted by the author team.

Corresponding author

Correspondence to Thomas L. Heise.

Ethics declarations

Ethics approval and consent to participate

Not applicable. No data of humans were used or analysed in this research.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Additional file 1.

Additional file 2.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

Reprints and permissions

About this article

Cite this article.

Heise, T.L., Seidler, A., Girbig, M. et al. CAT HPPR: a critical appraisal tool to assess the quality of systematic, rapid, and scoping reviews investigating interventions in health promotion and prevention. BMC Med Res Methodol 22 , 334 (2022). https://doi.org/10.1186/s12874-022-01821-4


Received : 29 May 2022

Accepted : 14 December 2022

Published : 26 December 2022

DOI : https://doi.org/10.1186/s12874-022-01821-4


Keywords

  • Critical appraisal tool
  • Evidence synthesis
  • Systematic review
  • Rapid review
  • Scoping review
  • Review of reviews
  • Mixed-methods
  • Meta-analysis
  • Health promotion

BMC Medical Research Methodology

ISSN: 1471-2288

Volume 27, Issue Suppl 2

12 Critical appraisal tools for qualitative research – towards ‘fit for purpose’

  • Veronika Williams 1 ,
  • Anne-Marie Boylan 2 ,
  • Nikki Newhouse 2 ,
  • David Nunan 2
  • 1 Nipissing University, North Bay, Canada
  • 2 University of Oxford, Oxford, UK

Qualitative research has an important place within evidence-based health care (EBHC), contributing to policy on patient safety and quality of care, supporting understanding of the impact of chronic illness, and explaining contextual factors surrounding the implementation of interventions. However, the question of whether, when and how to critically appraise qualitative research persists. Whilst there is consensus that we cannot, and should not, simplistically adopt existing approaches for appraising quantitative methods, it is nonetheless crucial that we develop a better understanding of how to subject qualitative evidence to robust and systematic scrutiny in order to assess its trustworthiness and credibility. Currently, most appraisal methods and tools for qualitative health research use one of two approaches: checklists or frameworks. We have previously outlined the specific issues with these approaches (Williams et al 2019). A fundamental challenge still to be addressed, however, is the lack of differentiation between methodological approaches when appraising qualitative health research. We do this routinely when appraising quantitative research: we have specific checklists and tools to appraise randomised controlled trials, diagnostic studies, observational studies and so on. Current checklists for qualitative research typically treat the entire paradigm as a single design (illustrated by titles of tools such as ‘CASP Qualitative Checklist’ and ‘JBI checklist for qualitative research’), and frameworks tend to require substantial understanding of a given methodological approach without providing guidance on how they should be applied. Given the fundamental differences in the aims and outcomes of different methodologies, such as ethnography, grounded theory and phenomenological approaches, as well as in specific aspects of the research process, such as sampling, data collection and analysis, we cannot treat qualitative research as a single approach. Rather, we must strive to recognise core commonalities relating to rigour while considering key methodological differences. We have argued for a reconsideration of current approaches to the systematic appraisal of qualitative health research (Williams et al 2021) and propose the development of a tool or tools that allow differentiated evaluations of multiple methodological approaches rather than continuing to treat qualitative health research as a single, unified method. Here we propose a workshop for researchers interested in the appraisal of qualitative health research and invite them to develop an initial consensus on the core aspects of a new appraisal tool that differentiates between qualitative research methodologies and thus provides a ‘fit for purpose’ tool for both educators and clinicians.

https://doi.org/10.1136/ebm-2022-EBMLive.36
