Evidence-Based Research

Means, S. N., Magura, S., Burkhardt, J. T., Schröter, D. C., & Coryn, C. L. S. (2015). Comparing rating paradigms for evidence-based program registers in behavioral health: Evidentiary criteria and implications for assessing programs. Evaluation and Program Planning, 48, 100-116. https://doi.org/10.1016/j.evalprogplan.2014.09.007

Abstract: Decision makers need timely and credible information about the effectiveness of behavioral health interventions. Online evidence-based program registers (EBPRs) have been developed to address this need. However, the methods by which these registers designate programs and practices as “evidence-based” have not been investigated in detail. This paper examines the evidentiary criteria EBPRs use to rate programs and the implications for how different registers rate the same programs. Although the registers tend to employ a standard Campbellian hierarchy of evidence to assess evaluation results, there is considerable disagreement among them about what constitutes an adequate research design and sufficient data for designating a program as evidence-based. Registers also differ in how they report findings of “no effect,” which may deprive users of important information. Of all programs on the 15 registers that rate individual programs, 79% appear on only one register. Among a random sample of 100 programs rated by more than one register, 42% were rated inconsistently across registers to some degree.

Archibald, T. (2015). “They Just Know”: The epistemological politics of “evidence-based” non-formal education. Evaluation and Program Planning, 48, 137-148. https://doi.org/10.1016/j.evalprogplan.2014.08.001

Abstract: Community education and outreach programs should be evidence-based. This dictum seems at once warranted, welcome, and slightly platitudinous. However, the “evidence-based” movement's narrower definition of evidence—privileging randomized controlled trials as the “gold standard”—has fomented much debate. Such debate, though insightful, often lacks grounding in actual practice. To address that lack, the study presented in this paper examines what actually happens, in practice, when people support the implementation of evidence-based programs (EBPs) or engage in related efforts to make non-formal education more “evidence-based.” Focusing on three cases—two adolescent sexual health projects (one in the United States and one in Kenya) and one more general youth development organization—I used qualitative methods to address three questions: (1) How is evidence-based program and evidence-based practice work actually practiced? (2) What perspectives and assumptions about the nature of non-formal education are manifested through that work? (3) What conflicts and tensions related to those perspectives and assumptions emerge through that work? Informed by theoretical perspectives on the intersection of science, expertise, and democracy, I conclude that the current dominant approach to making non-formal education more evidence-based by way of EBPs is seriously flawed.

Jobli, E. C., Gardner, S. E., Hodgson, A. B., & Essex, A. (2015). The review of new evidence 5 years later: SAMHSA's National Registry of Evidence-based Programs and Practices (NREPP). Evaluation and Program Planning, 48, 117-123. https://doi.org/10.1016/j.evalprogplan.2014.08.005

Abstract: The Substance Abuse and Mental Health Services Administration (SAMHSA) decided that NREPP should offer a second review option for interventions that have already been reviewed and included in the registry for 5 years. Principals from 135 such interventions were invited to participate in a second review, and an exploratory study of the Principals’ responses to this invitation was conducted. The study used a mixed-method approach, quantitatively describing characteristics of Principals and their interventions and qualitatively summarizing feedback from phone interviews with a convenience sample of Principals participating in a second review. Of the Principals invited, 21% accepted a second review, 24% were interested but unable or not ready to submit materials, and 56% did not accept or did not respond. Mental health treatment interventions were more likely to undergo a second review, and substance abuse treatment interventions were less likely. Similar percentages of interventions undergoing a second review had received funding from the National Institutes of Health (86%) and had been evaluated in a comparative effectiveness research study (79%). Overall ratings for interventions improved in each second review completed. The interviewed Principals perceived potentially lower ratings as the only risk in participating in a second review.

Burkhardt, J. T., Schröter, D. C., Magura, S., Means, S. N., & Coryn, C. L. S. (2015). An overview of evidence-based program registers (EBPRs) for behavioral health. Evaluation and Program Planning, 48, 92-99. https://doi.org/10.1016/j.evalprogplan.2014.09.006

Abstract: Evaluations of behavioral health interventions have identified many that are potentially effective. However, clinicians and other decision makers typically lack the time and ability to search and synthesize the relevant research literature effectively. In response to this need, and to increasing policy and funding pressures for the use of evidence-based practices, a number of “what works” websites have emerged to assist decision makers in selecting interventions with the highest probability of benefit. However, these registers as a whole are not well understood. This article, which represents phase one of a concurrent mixed methods study, presents a review of the scopes, structures, dissemination strategies, uses, and challenges of evidence-based registers in the behavioral health disciplines. The major findings of this study show that, in general, registers of evidence-based practices meet this need to a degree by identifying the most effective practices. However, much remains to be done before the registers can fully realize their purpose.

Gugiu, P. C. (2015). Hierarchy of evidence and appraisal of limitations (HEAL) grading system. Evaluation and Program Planning, 48, 149-159. https://doi.org/10.1016/j.evalprogplan.2014.08.003

Abstract: Despite more than 30 years of effort dedicated to improving grading systems for evaluating the quality of research study designs, considerable shortcomings remain. These shortcomings include the failure to define key terms, provide a comprehensive list of design flaws, demonstrate the reliability of such grading systems, properly value non-randomized controlled trials, and develop theoretically derived systems for penalizing and promoting the evidence generated by a study. Consequently, in light of the importance of grading guidelines in evidence-based medicine, steps must be taken to remedy these deficiencies. This article presents two methods – a grading system and a measure of methodological bias – for evaluating the quality of evidence produced by an efficacy study.

Mihalic, S. F., & Elliott, D. S. (2015). Evidence-based programs registry: Blueprints for Healthy Youth Development. Evaluation and Program Planning, 48, 124-131. https://doi.org/10.1016/j.evalprogplan.2014.08.004

Abstract: There is a growing demand for evidence-based programs to promote healthy youth development, but this growth has been accompanied by confusion related to varying definitions of evidence-based and mixed messages regarding which programs can claim this designation. The registries that identify evidence-based programs, while intended to help users sift through the findings and claims regarding programs, have oftentimes led to more confusion with their differing standards and program ratings. The advantages of using evidence-based programs and the importance of adopting a high standard of evidence, especially when taking programs to scale, are described. One evidence-based registry is highlighted—Blueprints for Healthy Youth Development, hosted at the University of Colorado Boulder. Unlike any previous initiative of its kind, Blueprints established unmatched standards for identifying evidence-based programs and has acted in a way similar to the FDA – evaluating evidence, data, and research to determine which programs meet its high standard of proven efficacy.

Schröter, D. C., Magura, S., & Coryn, C. L. S. (2015). Deconstructing evidence-based practice: Progress and ambiguities. Evaluation and Program Planning, 48, 90-91. https://doi.org/10.1016/j.evalprogplan.2014.10.001

Claes, C., van Loon, J., Vandevelde, S., & Schalock, R. L. (2015). An integrative approach to evidence based practices. Evaluation and Program Planning, 48, 132-136. https://doi.org/10.1016/j.evalprogplan.2014.08.002

© 2019 by NPSC.

Questions/comments, contact: Jbair@c-trans.org