Outcomes for implementation science: an enhanced systematic review of instruments
using evidence-based rating criteria
Lewis CC; Fischer S; Weiner BJ; Stanick C; Kim M; Martinez RG
Implement Sci
2015 [Nov]; 10: 155
PMID: 26537706
BACKGROUND: High-quality measurement is critical to advancing knowledge in any
field. New fields, such as implementation science, are often beset with
measurement gaps and poor quality instruments, a weakness that can be more easily
addressed in light of systematic review findings. Although several reviews of
quantitative instruments used in implementation science have been published, no
studies have focused on instruments that measure implementation outcomes. Proctor
and colleagues established a core set of implementation outcomes, including
acceptability, adoption, appropriateness, cost, feasibility, fidelity,
penetration, and sustainability (Adm Policy Ment Health Ment Health Serv Res
36:24-34, 2009). The Society for Implementation Research Collaboration (SIRC)
Instrument Review Project employed an enhanced systematic review methodology
(Implement Sci 2: 2015) to identify quantitative instruments of implementation
outcomes relevant to mental or behavioral health settings. METHODS: Full details
of the enhanced systematic review methodology are available (Implement Sci 2:
2015). To increase the feasibility of the review, and consistent with the scope
of SIRC, only instruments that were applicable to mental or behavioral health
were included. The review, synthesis, and evaluation included the following: (1)
a search protocol for the literature review of constructs; (2) the literature
review of instruments using Web of Science and PsycINFO; and (3) data extraction
and instrument quality ratings to inform knowledge synthesis. Our evidence-based
assessment rating criteria quantified fundamental psychometric properties as well
as a crude measure of usability. Two independent raters applied the
evidence-based assessment rating criteria to each instrument to generate a
quality profile. RESULTS: We identified 104 instruments across eight constructs,
with nearly half (n = 50) assessing acceptability and 19 identified for adoption,
with all other implementation outcomes revealing fewer than 10 instruments. Only
one instrument demonstrated at least minimal evidence for psychometric strength
on all six of the evidence-based assessment criteria. The majority of instruments
had no information regarding responsiveness or predictive validity. CONCLUSIONS:
Implementation outcomes instrumentation is underdeveloped with respect to both
the sheer number of available instruments and the psychometric quality of
existing instruments. Until psychometric strength is established, the field will
struggle to identify which implementation strategies work best, for which
organizations, and under what conditions.
|*Diffusion of Innovation
[MESH]
|Evidence-Based Practice
[MESH]
|Humans
[MESH]
|Mental Health Services/*organization & administration
[MESH]