Assessing quality in systematic reviews of the effectiveness of health promotion: areas of consensus and dissension


Bibliographic Details
Main Author: Shepherd, Jonathan Paul (Author)
Format: Electronic Book
Language: English
Published: 2009
Description
Summary: Systematic reviews have played an increasingly important role in health promotion in recent years. Yet there are debates about how they should be conducted, particularly about how the quality of evidence should be assessed. The aim of this research was to assess current approaches to, and general views on, the use of quality assessment in systematic reviews of effectiveness in health promotion, and to identify areas of consensus and dissension around the choice of techniques, methods and criteria employed.

There were two stages of data collection. The first was a structured mapping of a random sample of 30 systematic reviews of the effectiveness of health promotion to identify and explain trends and themes in methods and approaches to quality assessment. During the second stage, semi-structured interviews were conducted with a purposive sample of 17 systematic reviewers who had conducted at least one review of a health promotion topic, to investigate some of these trends and approaches in greater detail.

The mapping found that the majority of systematic reviews had assessed the quality of the included studies, to varying degrees. However, procedures were not always explicitly reported or consistent. There was some degree of consensus over criteria, with experimental evaluation methods commonly favoured. The most frequently used quality assessment criteria included participant attrition, the validity and reliability of data collection and analysis methods, and adequacy of sample sizes. External validity was commonly assessed, primarily in terms of generalisability and replicability, but less so in terms of intervention quality.

The interviews revealed some of the barriers to effective systematic reviewing, including lack of time and resources, the complexity of some health promotion interventions, the inclusion of observational evaluation designs, and poor reporting of primary studies. Systematic reviewing was commonly done in small teams, mostly comprising academics, sometimes with practitioners. Interviewees learned systematic review skills through a combination of training, support from colleagues and mentors, the literature, and a strong emphasis on hands-on practical learning. Subjective judgement was often required, contrary to the popular belief that systematic reviews are wholly objective.

The overall conclusions of this study are that systematic reviewing in health promotion is often challenging due to the complexity of interventions and evaluation designs. This places additional demands on reviewers in terms of the knowledge and skills required, often exacerbated by finite time scales and limited funding. Initiatives are in place to foster shared ways of working, although the extent to which complete consensus is achievable in a multi-disciplinary area such as health promotion is questionable.