Scaling up
From TrialTree Wiki
Designing a trial for scaling up focuses on assessing how an intervention can be effectively implemented and expanded to broader populations or settings. This type of trial emphasizes real-world applicability and addresses challenges related to implementation, sustainability, and generalizability. The primary objective is to evaluate both the effectiveness and feasibility of the intervention at scale, often guided by two central questions: Does the intervention work under routine, real-world conditions? And what factors influence its successful scale-up?
Study Design Considerations
Choosing an appropriate study design is critical. Common designs include pragmatic randomized controlled trials (pRCTs), which test interventions under routine conditions using broadly representative populations. Stepped-wedge cluster randomized trials are also popular; in this design, all clusters (e.g., clinics or communities) receive the intervention, but at different time points, enabling both control and intervention comparisons. Hybrid effectiveness-implementation trials offer an integrated approach, evaluating both health outcomes and implementation strategies. These are classified as Type I (emphasis on effectiveness), Type II (equal emphasis), and Type III (emphasis on implementation).
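To make the stepped-wedge idea concrete, here is a minimal sketch of how clusters might be randomly assigned to crossover steps. The function name, the clinic labels, and the even allocation across steps are illustrative assumptions, not a prescribed method; real trials would typically use dedicated randomization software and a statistician-approved schedule.

```python
import random

def stepped_wedge_schedule(clusters, n_steps, seed=0):
    """Assign clusters to crossover steps for a stepped-wedge design.

    Every cluster starts in the control condition and crosses over to
    the intervention at its assigned step; by the final step, all
    clusters are exposed, so each cluster contributes both control
    and intervention observations.
    """
    rng = random.Random(seed)
    shuffled = clusters[:]
    rng.shuffle(shuffled)  # random order determines who crosses over when
    schedule = {}
    for i, cluster in enumerate(shuffled):
        # Spread clusters as evenly as possible across the steps.
        schedule[cluster] = 1 + i % n_steps
    return schedule

# Hypothetical clinics; with 6 clusters and 3 steps, 2 clusters cross over per step.
clinics = ["clinic_A", "clinic_B", "clinic_C", "clinic_D", "clinic_E", "clinic_F"]
print(stepped_wedge_schedule(clinics, n_steps=3))
```

Fixing the seed makes the allocation reproducible for audit, while the shuffle keeps the order of crossover random, which is what distinguishes a stepped-wedge trial from a phased (non-randomized) rollout.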
Selecting and Adapting the Intervention
The chosen intervention must be scalable—feasible, cost-effective, and adaptable to local contexts. Core components that define the intervention’s effectiveness should remain consistent, while other elements can be adapted to suit specific settings. For instance, a digital health tool may require stable internet access as a non-negotiable feature, but interface language or design could vary by region.
Outcomes: Effectiveness and Implementation
Both effectiveness and implementation outcomes should be assessed. Effectiveness outcomes include clinical endpoints (e.g., reduced disease incidence, improved quality of life) and cost-effectiveness. Implementation outcomes can be structured using the RE-AIM framework: Reach (extent of population engagement), Effectiveness (impact on primary outcomes), Adoption (number of sites or individuals using the intervention), Implementation (fidelity, adherence, and adaptations), and Maintenance (long-term sustainability).
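Several RE-AIM dimensions reduce to simple proportions once the trial counts are in hand. The sketch below shows one way to compute them; the function name and all the counts are hypothetical, and real evaluations usually pair these ratios with qualitative data on adaptations and maintenance.

```python
def re_aim_summary(eligible, enrolled, sites_approached, sites_adopting,
                   sessions_planned, sessions_delivered):
    """Compute simple RE-AIM-style proportions from raw trial counts."""
    return {
        "reach": enrolled / eligible,                       # Reach: population engagement
        "adoption": sites_adopting / sites_approached,      # Adoption: site uptake
        "fidelity": sessions_delivered / sessions_planned,  # Implementation: delivery as planned
    }

# Hypothetical counts for illustration.
summary = re_aim_summary(eligible=2000, enrolled=640,
                         sites_approached=25, sites_adopting=18,
                         sessions_planned=1200, sessions_delivered=1020)
print(summary)  # {'reach': 0.32, 'adoption': 0.72, 'fidelity': 0.85}
```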
Randomization and Sampling Strategies
Cluster-level randomization is commonly employed, using units such as hospitals, communities, or regions. It is important to ensure heterogeneity among clusters so that generalizability can be assessed. Stratification or other balancing procedures may be used to improve comparability across trial arms.
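A stratified cluster allocation can be sketched in a few lines: clusters are grouped by a stratification factor (here a hypothetical urban/rural label) and randomized to arms within each stratum, so the arms stay balanced on that factor. The function name, cluster labels, and alternating-assignment scheme are illustrative assumptions, not a standard package.

```python
import random
from collections import defaultdict

def stratified_cluster_randomize(clusters, seed=0):
    """Randomize whole clusters to trial arms within strata.

    clusters: dict mapping cluster name -> stratum label (e.g. "urban"/"rural").
    Within each stratum, members are shuffled and then alternated between
    arms, which keeps the arms balanced on the stratification factor.
    """
    rng = random.Random(seed)
    by_stratum = defaultdict(list)
    for cluster, stratum in clusters.items():
        by_stratum[stratum].append(cluster)
    allocation = {}
    for stratum in sorted(by_stratum):          # sorted for reproducibility
        members = by_stratum[stratum]
        rng.shuffle(members)                    # random order within the stratum
        for i, cluster in enumerate(members):
            allocation[cluster] = "intervention" if i % 2 == 0 else "control"
    return allocation

# Hypothetical hospitals stratified by setting.
clusters = {"H1": "urban", "H2": "urban", "H3": "rural", "H4": "rural"}
print(stratified_cluster_randomize(clusters))
```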
Flexibility and Contextual Adaptation
Planning for adaptation is essential in scale-up trials. Local variations should be expected and embraced, provided they do not compromise the intervention’s core features. Collecting contextual data—such as leadership structure, resource availability, or policy environment—helps interpret variations in outcomes.
Data Collection and Monitoring
A mixed-methods approach is typically used, combining quantitative measures of health outcomes with qualitative insights into the implementation process. Regular monitoring, site visits, interviews, and process evaluations can help document fidelity, barriers, and facilitators to implementation.
Economic Evaluation
Economic evaluations are vital for informing scale-up decisions. These include cost-effectiveness analysis and budget impact analysis, comparing the cost of scaling with potential health gains and financial feasibility within health systems.
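The core arithmetic of a cost-effectiveness comparison is the incremental cost-effectiveness ratio (ICER): the extra cost of the scaled programme divided by the extra health gained. A minimal sketch, with all figures hypothetical:

```python
def icer(cost_new, cost_old, effect_new, effect_old):
    """Incremental cost-effectiveness ratio: extra cost per extra unit
    of health effect (e.g. per QALY gained)."""
    delta_cost = cost_new - cost_old
    delta_effect = effect_new - effect_old
    if delta_effect == 0:
        raise ValueError("no incremental effect; ICER is undefined")
    return delta_cost / delta_effect

# Hypothetical figures: the scaled programme costs $1.2M versus $0.8M for
# usual care, and yields 500 versus 420 QALYs.
print(icer(1_200_000, 800_000, 500, 420))  # 5000.0, i.e. $5,000 per QALY gained
```

Decision-makers then compare this ratio against a willingness-to-pay threshold; budget impact analysis answers the separate question of whether the health system can absorb the total cost at all.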
Ethical and Logistical Considerations
Stakeholder and community engagement is key to trial success and sustainability. Trials should ensure equitable access to interventions and include a plan for continuing the intervention post-trial. Ethical oversight is also needed to maintain transparency and fairness, especially in resource-limited settings.
Analysis and Reporting
Effectiveness outcomes should be analyzed using an intention-to-treat approach. Implementation results can be interpreted using established frameworks such as RE-AIM or the Consolidated Framework for Implementation Research (CFIR). Findings should be disseminated to stakeholders—including policymakers, practitioners, and community leaders—to inform decisions about wider adoption.
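The defining feature of intention-to-treat analysis is that participants are summarized by the arm they were *assigned* to, not by what they actually did. A minimal sketch with hypothetical participant IDs and outcome values:

```python
def itt_means(assignments, outcomes):
    """Intention-to-treat summary: mean outcome by *assigned* arm,
    regardless of whether participants actually used the intervention."""
    totals, counts = {}, {}
    for pid, arm in assignments.items():
        totals[arm] = totals.get(arm, 0.0) + outcomes[pid]
        counts[arm] = counts.get(arm, 0) + 1
    return {arm: totals[arm] / counts[arm] for arm in totals}

# Hypothetical data: change in HbA1c (%) per participant.
assignments = {"p1": "intervention", "p2": "intervention",
               "p3": "control", "p4": "control"}
hba1c_change = {"p1": -0.8, "p2": -0.2, "p3": -0.1, "p4": 0.1}
print(itt_means(assignments, hba1c_change))
# {'intervention': -0.5, 'control': 0.0}
```

Note that p2's small change still counts toward the intervention arm even if p2 barely used the app; analyzing only adherent participants (a per-protocol analysis) would break the comparability that randomization created.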
Example Workflow
A trial aiming to scale up a mobile health application for diabetes management across multiple primary care clinics may use a stepped-wedge cluster RCT design. The primary outcomes could include improvements in glycaemic control (e.g., HbA1c), adoption rates across clinics, and fidelity to app use. The evaluation would employ mixed methods, combining quantitative outcome analysis with qualitative interviews to understand user experience and implementation challenges.
Bibliography
- Milat AJ, Bauman AE, Redman S. A narrative review of research impact assessment models and methods. Health Research Policy and Systems. 2015;13:18. Includes discussion on scaling up evidence-based interventions from RCTs.
- Bonell C, Jamal F, Melendez-Torres GJ, Cummins S. ‘Dark logic’: theorising the harmful consequences of public health interventions. Health & Place. 2015;33:44–49. Highlights complexities in scaling interventions tested in RCTs.
- Craig P, Dieppe P, Macintyre S, Michie S, Nazareth I, Petticrew M. Developing and evaluating complex interventions: the new Medical Research Council guidance. BMJ. 2008;337:a1655.
- Aarons GA, Sklar M, Mustanski B, Benbow N, Brown CH. “Scaling-out” evidence-based interventions to new populations or new health care delivery systems. Implementation Science. 2017;12:111.
- Yamey G. What are the barriers to scaling up health interventions in low and middle income countries? BMJ. 2012;347:f6549.
Adapted for educational use. Please cite relevant trial methodology sources when using this material in research or teaching.