In 2016, CNCS and Mathematica conducted the Scaling Evidence-Based Models (SEBM) project to deepen the agency’s understanding of the most effective program innovations and its knowledge base on scaling them.
 
Goals:
Using a scaling framework as a guide, Mathematica conducted a process study of three CNCS-funded grantees and their partners to learn:
  • How they scaled their interventions
  • What factors facilitated or hindered scaling
  • The conditions required for successful scaling
 
Research Questions:
The study addressed two questions:
  • How did selected grantees define and operationalize scaling?
  • How did selected grantees scale evidence-based interventions?
 
Findings:
The case study found the following:
  • To many grantee personnel, the activities the study identified as scaling were part of normal program operations to respond to community needs and improve the intervention's success. As a result, they did not appear to view scaling as distinct from business as usual.
  • Organizational leaders might consider the characteristics and backgrounds of their personnel when adapting to serve a new target population. Employing staff whose characteristics resemble those of the population they intend to serve may help reach that population.
  • Organizations often used multiple funding sources to support scaling. This can lead to duplicated data collection efforts to satisfy the funders' differing requirements.
  • The timing of funding availability might lead a grantee to condense its preferred planning period, resulting in a schedule that is challenging given the features of the intervention.
  • Grantees reported three logistical challenges with training grantee staff and AmeriCorps members that arose directly or indirectly due to scaling:
    • Replicating to new sites meant training sometimes needed to be decentralized and conducted in more than one location, which could lead to a lack of uniformity of the training.
    • Individuals often needed additional information or training specific to their site, which also needed to align with the intervention's implementation materials.
    • Sites often had to juggle trainings for new and existing personnel while actively implementing their programs and serving participants.
  • While informal, in-person communication was feasible when the grantee team implementing the intervention was small, scaling required formalizing communication procedures and relying on multiple small-group meetings at sites or on technological solutions such as video chat.
  • None of the three grantees had a formal continuous quality improvement (CQI) process.
  • Generally, personnel across the grantees felt supported by their organizational leaders. The leaders were trusted and often worked to extend trust and collaboration between staff and AmeriCorps members across the programs they operated.
  • Intervention developers helped to ensure fidelity to the intervention model and were involved in innovative adaptations.
For more information, download the case study.
 

Scaling Evidence-Based Models (SEBM) Project

The Office of Research and Evaluation (ORE) initiated the Scaling Evidence-Based Models project to support the scaling of effective interventions. This case study is part of that project, which also includes the additional scaling resources listed below:
 
Guides:
  • Scaling an Intervention: Recommendations and Resources: The guide provides five key recommendations that will help funders like AmeriCorps, other government agencies, and philanthropic organizations identify which funded interventions are effective, enhance their knowledge base on scaling them, and pursue scaling.
  • Baseline Equivalence: What it is and Why it is Needed: This guide is designed to help practitioners and researchers work together to design an impact study with baseline equivalence and, in turn, learn how to determine whether an impact study is likely to produce meaningful results.
  • What Makes for a Well-Designed, Well-Implemented Impact Study: This guide is intended to help practitioners ensure that their evaluators produce high-quality impact studies.
  • How to Structure Implementation Supports: This guide will help practitioners develop formal strategies (also known as implementation supports) to help consistently deliver an intervention as it was designed, which is especially helpful for organizations scaling an intervention and assessing implementation fidelity.
  • Build Organizational Capacity to Implement an Intervention: This guide will help practitioners prepare to implement their desired intervention through building organizational capacity, which involves establishing the organizational structure, workforce, resources, processes, and culture to enable success.
  • How to Fully Describe an Intervention: This guide is intended to help practitioners thoroughly describe their intervention and communicate it clearly to potential funders or stakeholders.
  • Making the Most of Data: This guide will help practitioners maximize the use of their intervention data to help their organizations improve program implementation and provide evidence to funders about effectiveness.
  • Scaling Evidence-Based Models: Document Review Rubrics: This guide is a two-part rubric for systematically reviewing documents; it will help practitioners identify the critical components of intervention effectiveness and describe plans for scaling the effective intervention.

Tools:

  • Scaling Checklists: Assessing Your Level of Evidence and Readiness (SCALER): This report describes a framework that identifies how organizations can improve both their readiness to scale an intervention and the intervention’s readiness to be scaled, so that intervention services are best positioned to improve outcomes for a larger number of participants. Each checklist in the SCALER provides summary scores to reflect how ready an intervention and organization might be for scaling.

Further information

Program/Intervention: Best Practice Dissemination, Scaling
Implementing Organization: CNCS Office of Research and Evaluation Commissioned Report
Intermediary(s): Parent Possible, Child Abuse Prevention Council (CAPC), United Way of Iowa
Age(s) Studied: 0-5 (Early childhood), 6-12 (Childhood)
Focus Population(s)/Community(s): Schools, Rural, Suburban, Urban
Outcome Category: School readiness, K-12 success, Improving AmeriCorps
Study Type(s): Case Study or Descriptive
Researcher/Evaluator: Mathematica
Published Year: 2020
Study Site Location (State): California