Innovative study designs examine implementation and generate new knowledge.
Takeaways:
- Some organizations struggle to meet Magnet® nurse research requirements.
- Many research efforts result in applying evidence or quality improvement efforts but don’t create new knowledge.
- Pragmatic dissemination and implementation studies differ from traditional research by combining efficacy, effectiveness, and implementation to quickly translate evidence into clinical practice.
The American Nurses Credentialing Center’s 2019 Magnet® Application Manual requires evidence of at least one ongoing and two completed research studies. Some organizations have struggled to meet this new requirement. Magnet encourages a strong clinical nurse presence in all aspects of practice, including research, but most nurses without a graduate degree are more comfortable implementing evidence than generating it. As a result, many research endeavors end in evidence application or quality improvement but don’t create new knowledge. We propose that pragmatic dissemination and implementation hybrid research designs (also referred to as pragmatic studies and pragmatic research) may support the Magnet recognition research requirement. This approach examines evidence-based practice implementation and generates new knowledge.
Traditional vs. pragmatic research
Traditional research design uses a linear approach across consecutive studies, first examining efficacy, then effectiveness. Efficacy studies examine how an intervention performs under ideal circumstances, usually within randomized controlled trials (for example, one group in a study receives a drug while the control group receives a placebo). These studies answer the question “Did the intervention work?” Effectiveness studies examine how an intervention performs in a real-world setting and answer the question “How well did the intervention work?”
Pragmatic dissemination and implementation hybrid designs combine efficacy-effectiveness or efficacy-effectiveness-implementation to accelerate the translation of evidence into clinical practice. They simultaneously examine the effect of implementing an intervention in a real-world population and setting, providing evidence of both the effect of an evidence-based practice on patient outcomes and how it was best applied. The Department of Veterans Affairs Quality Enhancement Research Initiative reported that moving evidence into practice took 7 years using pragmatic research compared with 17 years using traditional research.
Pragmatic research has several characteristics that distinguish it from traditional research. It addresses operational questions important to stakeholders such as clinicians and administrators; it takes place where patients receive care, accounting for the complexity of a real-world clinical setting; it considers patient, staff, and setting characteristics that can create implementation challenges; it compares to real alternatives rather than a placebo; and it examines whether an intervention is feasible (Can it be done by the clinicians or patients in that setting?), generalizable (Does it work in various settings and populations?), and effective (Does the intervention improve an outcome when applied in practice?). Traditional research generally answers questions about what to do for patient care, while pragmatic research focuses on how to carry out best practice standards and their relevance to the clinical setting in which the research is conducted.
Pragmatic studies have three hybrid design types, which include 10 features that make them a good fit for nurses conducting research. (See Hybrid design types.) Implementing pragmatic studies requires understanding methodology, evaluation measures, and how to leverage partnerships.
Pragmatic study methodology
Methodologies common to pragmatic studies include cluster randomization, stepped wedge cluster randomization, and pragmatic comparison group.
Cluster randomization
Cluster randomization is used at the setting or group level. It increases efficiency, decreases contamination risk, and is more likely to generate participation. However, randomizing a small number of clusters with high variability among settings or populations can reduce statistical accuracy because of imbalance among the groups. For example, care on a unit of older patients isn’t always comparable to care on a surgical unit with younger patients.
EXAMPLE: A setting used in cluster randomization may be a unit to examine fall rates or a clinic to examine patient wait time. A group may be intensive care and operating room nurses to examine care transitions or patients with heart failure or cancer to examine depression.
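The mechanics of unit-level assignment described above can be sketched in a few lines of code. This is an illustrative sketch only; the unit names and the simple two-arm, shuffle-and-alternate scheme are hypothetical, and real studies would use a formal randomization plan.

```python
import random

def randomize_clusters(clusters, arms=("intervention", "control"), seed=2024):
    """Assign whole clusters (units), not individual patients, to study arms.

    Shuffling then alternating through the arms keeps arm sizes balanced,
    which matters when only a handful of clusters are available.
    """
    rng = random.Random(seed)  # fixed seed so the assignment is reproducible
    shuffled = clusters[:]
    rng.shuffle(shuffled)
    return {unit: arms[i % len(arms)] for i, unit in enumerate(shuffled)}

# Hypothetical nursing units for a fall-rate study.
units = ["4 North", "4 South", "5 North", "5 South", "6 East", "6 West"]
assignment = randomize_clusters(units)
```

Because the cluster, not the patient, is the unit of randomization, every patient on a given unit experiences the same condition, which is what reduces contamination between arms.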
Stepped wedge cluster randomization
Stepped wedge cluster randomization involves sequential rollout of an intervention. With this technique, all groups (clusters) are monitored during implementation, but the rollout is phased. The first cluster is provided with the intervention immediately, with another cluster receiving the intervention after a waiting period (for example, 3 months). Crossover of clusters from control to intervention continues (with the waiting period in between) until all clusters participate in the intervention. Data obtained during the waiting period can be used for comparison and modifications as needed.
Careful planning is needed to ensure changes aren’t made to the intervention during the initial rollout; modifications should be thoughtfully made and introduced at subsequent rollouts.
This design is particularly suited to quality improvement projects, but it’s vulnerable to time-varying confounding, which occurs when a patient’s condition or treatment changes over time because of his or her health status.
EXAMPLE: A hospital might examine fall rates like this: a standard fall-prevention bundle is rolled out for 6 months on five units, the need for enhanced physical therapy is discovered, the enhanced bundle is rolled out on five different units for 6 months, and then the fall rates among the 10 units are compared.
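The phased crossover described above can also be sketched as a rollout schedule. This is an illustrative sketch; the cluster labels, six-period horizon, and one-cluster-per-period step are hypothetical choices, not part of the design’s definition.

```python
def stepped_wedge_schedule(clusters, periods):
    """Build a stepped wedge rollout table: 'C' = control period,
    'I' = intervention period.

    The first cluster starts the intervention immediately; each
    subsequent cluster waits one more period before crossing over,
    and no cluster ever crosses back to control.
    """
    schedule = {}
    for wait, cluster in enumerate(clusters):
        schedule[cluster] = ["C"] * wait + ["I"] * (periods - wait)
    return schedule

# Hypothetical rollout: five units monitored over six periods.
plan = stepped_wedge_schedule(
    ["Unit 1", "Unit 2", "Unit 3", "Unit 4", "Unit 5"], periods=6
)
```

Reading the table row by row shows why the design suits phased quality improvement: every cluster eventually receives the intervention, while the waiting periods supply the comparison data mentioned above.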
Pragmatic comparison group
Pragmatic comparison group or control condition design usually tests alternatives in intensity, intervention content and cost, modality, or other dimensions to increase study value. Because the comparison is a minimal or existing alternative rather than a placebo, these studies are economical, which maximizes feasibility and applicability. This design allows for assessing whether a more complex and costly intervention is worth the expense.
EXAMPLE: A pragmatic comparison group design might compare delivery of a medication-adherence intervention by a nurse, which is costly, with delivery via smartphone reminder, which is low cost.
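The “worth the expense” judgment in this example can be expressed as a simple incremental cost-effectiveness calculation. The figures below are entirely hypothetical, chosen only to show the arithmetic.

```python
def cost_per_extra_adherent(cost_a, effect_a, cost_b, effect_b):
    """Incremental cost-effectiveness: extra dollars spent per additional
    adherent patient when choosing the costlier option A over option B."""
    return (cost_a - cost_b) / (effect_a - effect_b)

# Hypothetical figures per 100 patients:
# nurse-delivered coaching vs. automated smartphone reminders.
nurse_cost, nurse_adherent = 12000, 70  # dollars, adherent patients
app_cost, app_adherent = 1500, 60
icer = cost_per_extra_adherent(nurse_cost, nurse_adherent, app_cost, app_adherent)
# icer → 1050.0 dollars per additional adherent patient
```

A stakeholder could then ask whether each additional adherent patient is worth that incremental cost, which is exactly the operational question pragmatic comparison designs are built to answer.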
Evaluation measures
Pragmatic evaluation measures focus on evaluating context, implementation strategies, and outcomes to examine the success or failure of the strategies. Mixed methods approaches, such as a patient survey about perception of care with both closed and open-ended questions, often are used. Outcomes include acceptability, adoption, appropriateness, feasibility, cost, penetration, and sustainability. The Society for Implementation Research Collaboration Instrument Review Project has a repository of outcome measures that may be helpful to organizations designing research studies (societyforimplementationresearchcollaboration.org/sirc-instrument-project).
Partnerships
Pragmatic dissemination and implementation hybrid research studies capitalize on efforts already underway within organizations. One way to do that is by partnering a healthcare system pursuing Magnet recognition with a college of nursing that has a doctor of nursing practice (DNP) program. Many DNP projects are designed using the pragmatic dissemination and implementation hybrid approach. Students employed by the hospital could conduct their research under the guidance of the organization’s research nurse or clinical nurse specialists, focusing on health system priorities and involving clinical nurses in all stages of the project. Pragmatic research studies examine outcomes that also may be used to support Magnet sources of evidence with empirical outcome requirements.
Improving practice
Pragmatic implementation and dissemination hybrid study designs can be feasible to implement, can be completed expeditiously, and can produce meaningful findings that improve the practice environment. (See Pragmatic research in action.) Given the clinical focus of most nursing research in Magnet-recognized organizations, these studies have the potential to make a big impact on individual organizations and on population health outcomes across the care continuum.
Sandra L. Spoelstra is an associate professor at Grand Valley State University Kirkhof College of Nursing in Grand Rapids, Michigan. Jennifer Kaiser is a senior health systems research nurse at Spectrum Health in Grand Rapids, Michigan. Marie Vanderkooi is an associate professor at Grand Valley State University Kirkhof College of Nursing.
Select References
American Nurses Credentialing Center. 2019 Magnet® Application Manual. Silver Spring, MD: ANA Enterprise; 2017.
Battaglia C, Glasgow RE. Pragmatic dissemination and implementation research models, methods and measures and their relevance for nursing research. Nurs Outlook. 2018;66(5):430-45.
Bernet AC, Willens DE, Bauer MS. Effectiveness-implementation hybrid designs: Implications for quality improvement science. Implement Sci. 2013;8(Suppl 1):S2.
Curran GM, Bauer M, Mittman B, Pyne JM, Stetler C. Effectiveness-implementation hybrid designs: Combining elements of clinical effectiveness and implementation research to enhance public health impact. Med Care. 2012;50(3):217-26.
Glasgow RE, Rabin BA. Implementation science and comparative effectiveness research: A partnership capable of improving population health. J Comp Eff Res. 2014;3(3):237-40.
Graystone R. 2019 Magnet® Application Manual raises the bar for nursing excellence: Revisions to the manual clarify the value of nursing across all healthcare settings. Am Nurse Today. 2018;13(1):48-9.
Jones TL. Outcome measurement in nursing: Imperatives, ideals, history, and challenges. Online J Issues Nurs. 2016;21(2). ojin.nursingworld.org/MainMenuCategories/ANAMarketplace/ANAPeriodicals/OJIN/TableofContents/Vol-21-2016/No2-May-2016/Outcome-Measurement-in-Nursing.html
Kilbourne AM, Elwy AR, Sales AE, Atkins D. Accelerating research impact in a learning health care system: VA’s quality enhancement research initiative in the Choice Act era. Med Care. 2017;55(Suppl 7):S4-12.
King KM, Thompson DR. Pragmatic trials: Is this a useful method in nursing research? J Clin Nurs. 2008;17(11):1401-2.
Pintz C, Zhou QP, McLaughlin MK, Kelly KP, Guzetta CE. National study of nursing research characteristics at Magnet®-designated hospitals. J Nurs Adm. 2018;48(5):247-58.
Titler MG. Methods in translation science. Worldviews Evid Based Nurs. 2004;1(1):38-48.
Weiss ME, Bobay KL, Johantgen M, Shirey MR. Aligning evidence-based practice with translational research: Opportunities for clinical practice research. J Nurs Adm. 2018;48(9):425-31.