Epidemiologist

Epidemiologists help with study design, collection and statistical analysis of data, and interpretation and dissemination of results (including peer review and occasional systematic review). Epidemiology has helped develop methodology used in clinical research, public health studies and, to a lesser extent, basic research in the biological sciences.

Tuesday, 08 October 2013

The importance of quality implementation for research, practice, and policy


Executive Summary

The steps for implementing a program for children and youth may seem straightforward: identify a need, hire staff, and provide the service or product to a target population. However, implementing programs that work requires careful advance planning, the involvement of multiple stakeholders, and a process that ensures accountability. Poor implementation not only reduces the potential for helping children and youth in need, it also wastes scarce public resources, because poorly implemented programs are unlikely to be very successful.  In addition, when a program is implemented poorly, we cannot tell whether or not it works.

Research on quality program implementation has identified a number of factors that can significantly improve the implementation process and thereby increase the effectiveness of programs. This issue brief discusses some of the fundamentals of quality program implementation that have been identified through research and practice and that may be useful for practitioners, policymakers, and researchers alike.

This brief defines quality program implementation, highlights the importance of high quality implementation, identifies 23 factors that affect implementation, discusses 14 steps in achieving quality implementation (10 of which need to occur before a program starts), and notes that responsibility for quality implementation is shared by key stakeholders. The factors that can affect implementation quality include societal, community, program, practitioner, and organizational influences, as well as the implementation process itself.  The brief explains how implementation should focus on core components, allowing adaptation of other aspects to suit the population and setting.

1.      Public policy decisions should be based on evaluations of programs that have been implemented with quality. Otherwise, the relative value and cost-effectiveness of alternative programs cannot be determined.

2.      Implementation is important for all child and youth programs and increasing the quality of implementation increases the chances that the program will yield its intended outcomes.

3.      It is possible to adapt an evidence-based program to fit local circumstances and needs as long as the program’s core components, established by theory or preferably through empirical research, are retained and not modified.

4.      High quality implementation is the joint responsibility of multiple stakeholders who typically include funders/policy makers, program developers/researchers, local practitioners, and local administrators.

Although there are many factors that can affect the quality of implementation and multiple steps in the implementation process, success is possible, and resources are available to help select and implement evidence-based programs effectively.


Introduction
Sometimes, program evaluations report no difference in outcomes between persons given a program and those not given the program.  Is this because the program does not work, or because it was poorly implemented?  Achieving high quality program implementation is critical to achieving anticipated outcomes, and researchers have made considerable progress in clarifying its importance in the past several decades.

This brief defines program implementation, highlights the importance of high quality implementation, identifies key factors that affect implementation, presents the steps involved in achieving quality implementation, and specifies who has responsibility for quality implementation. The last section describes some practical lessons that have been learned about implementation through systematic research and practice.  The focus here is on evidence-based programs, although implementation is relevant in all program operations and evaluations. Whenever any program is being conducted, it is important to monitor the level of implementation that has been achieved so its impact can be interpreted appropriately.

It is assumed that, for the general public welfare, societies strive toward the fairest allocation of public resources to as many in the population as possible.  However, resources are always limited in some way.  Usually, important either-or decisions must be made.  Should we support this program or an alternative program?  Should we introduce a new program or continue with services as usual?  These decisions should be made in reference to how well a program has been implemented, in addition to evidence of the program’s effectiveness.

Society experiences serious short- and long-term costs when programs are poorly implemented. The money, resources, and staff time associated with poorly implemented programs are not well spent because poorly implemented programs are unlikely to be very successful.  The decision-making process regarding the fairest and most effective allocation of limited social resources is also compromised when the potential impact of programs cannot be determined because implementation is poor. Too often, interpretations of evaluation findings are limited at best because the program was not well-implemented.  Poorly implemented programs can mislead decision-makers into assuming that a program is ineffective when in reality the program might work very well if it were well-implemented. In sum, a focus on implementation advances research, practice, and policy, and leads to better services within our communities and better outcomes for children and youth.

Although various definitions of implementation exist, the one presented by Damschroder and Hagedorn (2011) is used here: “Implementation refers to efforts designed to get evidence-based programs or practices of known dimensions into use via effective change strategies” (p. 195). Extensive experience indicates that when evidence-based programs are attempted by a new organization, in a new setting, or by new staff, they are not automatically reproduced or replicated with the quality intended by the program developers.  For a variety of reasons, major changes can occur, so that the new program may not be an accurate reproduction of the core components of the original version.

The gap between how a program is intended by its designers to be delivered and its actual delivery in practice is referred to as implementation variation.  Implementation may vary from strict adherence to program protocols as designed to subtle or major changes in program protocols.  The challenge is to implement a program with sufficient quality to obtain the outcomes found in original trials.  In other words, implementation exists along a continuum and one can think of poor, medium, or high quality implementation.  The emphasis here is on high quality because implementation to this degree increases the chances of obtaining the outcomes found in original trials.     

Evidence for the importance of high quality implementation has been obtained in multiple areas including education, mental health, health care, technology, industry, and management (Durlak & DuPre, 2008; Fixsen, Naoom, Blase, Friedman, & Wallace, 2005).  Moreover, implementation is important regardless of the characteristics of the target population, the type of program, and specific program goals.

Research clearly indicates that the quality of program implementation is one critical factor associated with youth outcomes.  For example, one review of school-based prevention programs found that implementation quality was the most important program feature associated with reducing aggressive behavior (Wilson, Lipsey, & Derzon, 2003).  In many cases, programs have failed to achieve their intended outcomes for youth when implementation was poor whereas, in other cases, program impact was much higher when there were reports of more effective implementation (Durlak & DuPre, 2008).  In other words, participants may receive more benefits as a result of better program implementation, or they may receive no significant benefit if program implementation is poor.

Additional research findings indicate the importance of high quality implementation. In reviews of bullying prevention programs (Smith, Schneider, Smith, & Ananiadou, 2004) and youth mentoring programs (DuBois, Holloway, Valentine, & Cooper, 2002), authors compared outcomes for youth who had participated in programs that varied in the quality of their implementation.  Compared to participants in programs that were poorly implemented, youth who had been in programs that had been implemented with higher quality demonstrated two or three times as much benefit on outcomes such as increased social competence and lower levels of bullying.

Still another example illustrates the importance of quality implementation in affecting critical youth outcomes.  In a large-scale review of school-based programs involving over 200 studies and over a quarter of a million youth, the benefits demonstrated by students receiving programs associated with higher quality implementation were compared to those participating in programs that were implemented with poorer quality (Durlak, Weissberg, Dymnicki, Taylor, & Schellinger, 2011).  The former students showed gains in academic performance that were twice as high as the latter group; furthermore, the students in the better implemented programs also showed a reduction in emotional distress (e.g., depression and anxiety) that was more than double the reduction shown by the latter group, and a reduction in levels of conduct problems that was nearly double that of the latter group.  In other words, effective implementation can lead to larger gains for youth in several important domains of adjustment.  With poor implementation, you may get little or no change; with effective, high quality implementation, you may get changes of larger magnitude.  The above data indicate it is clearly worthwhile to strive for high quality implementation.

Table 1. Twenty-three Factors that Affect Implementation
Community-wide or societal factors
1.        Scientific theory and research
2.        Political pressures and influences
3.        Availability of funding
4.        Local, state, or federal policies
Practitioner characteristics
5.        Perceived need for the program
6.        Perceived benefits of the program
7.        Self-efficacy
8.        Skill proficiency
Characteristics of the program
9.        Compatibility or fit with the local setting
10.     Adaptability
Factors related to the organization hosting the program
11.     Positive work climate
12.     Openness to change and innovation
13.     Integration of new programming
14.     Shared vision and consensus about the program
15.     Shared decision-making
16.     Coordination with other agencies
17.     Openness and clarity of communication among staff and supervisors
18.     Formulation of tasks (workgroups, teams, etc.)
19.     Effective leadership
20.     Program champion (internal advocate)
21.     Managerial/supervisory/administrative support
Factors specific to the implementation process
22.     Successful training
23.     On-going technical assistance


The importance of implementation quality is widely recognized in the medical field, and drug treatment for medical conditions offers a useful analogy: The correct drug must be given and in sufficient dosage to obtain the desired effect.  Moreover, there is always a need to monitor drug use because many patients do not follow the prescribed drug regimen.  When drug monitoring occurs, changes can be quickly made so the effect of the drug can be accurately assessed.  Otherwise, the physician cannot determine if the use of a particular drug is having the intended effect.

The same goes for any evidence-based program in the area of human services.  It is important to ensure an evidence-based program is implemented with high quality in order to achieve the intended effects.  This means we must periodically monitor program implementation so we can make adjustments as needed to help ensure high-quality implementation.  For example, an evidence-based program may be unsuccessful in one setting due to poor implementation, but the same program may be successful in another setting when it is implemented with quality.  

In sum, implementation quality is important throughout the entire range and nature of child and youth services, whether the goal is to treat children with adjustment problems, prevent later problems, promote young people’s personal and social development, increase students’ academic performance, promote infant health, or prevent teenage pregnancy.

Of course, success is never guaranteed; if it were, then we would always know what results would occur in every situation.  The point is that quality implementation is necessary to increase the chances of being successful. In other words, “when it comes to implementation, what is worth doing, is worth doing well.”      

Factors that Affect the Quality of Implementation

In order to understand the types of factors that influence the quality of implementation of prevention programs for children and adolescents, Durlak and DuPre (2008) conducted a systematic search of the literature.  They identified 23 factors that had received consistent support in at least five different research studies.  A list of these 23 factors, which can be divided into five major categories, is contained in Table 1.  Furthermore, there is consensus regarding the importance and wide applicability of these potential influences.  Other reviews that have focused on health care (Greenhalgh, Robert, Macfarlane, Bate, Kyriakidou, & Peacock, 2005), child abuse and neglect and domestic violence programs for adults (Stith et al., 2006), or both treatment and prevention programs for children and adults (Fixsen, Naoom, Blase, Friedman, & Wallace, 2005) have independently identified many of these same factors.
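
As a purely illustrative sketch (not part of the brief), the five categories in Table 1 can be treated as a pre-implementation checklist that an organization rates and reviews before adopting a program. Everything below is hypothetical: the 1-5 rating scale, the threshold, and the idea of scoring factors at all are assumptions for illustration.

```python
# Hypothetical sketch: Table 1's factors as a simple readiness checklist.
# The 1-5 rating scale and the threshold are illustrative assumptions.

FACTORS = {
    "Community or societal": [
        "Scientific theory and research",
        "Political pressures and influences",
        "Availability of funding",
        "Local, state, or federal policies",
    ],
    "Practitioner characteristics": [
        "Perceived need for the program",
        "Perceived benefits of the program",
        "Self-efficacy",
        "Skill proficiency",
    ],
    # ...the program, organizational, and implementation-process categories
    # from Table 1 would be listed in the same way.
}

def flag_weak_factors(ratings, threshold=3):
    """Return the factors rated below the (assumed) acceptable threshold."""
    known = {f for factors in FACTORS.values() for f in factors}
    unknown = set(ratings) - known
    if unknown:
        raise ValueError(f"Not in the Table 1 checklist: {unknown}")
    return [f for f, rating in ratings.items() if rating < threshold]

ratings = {"Availability of funding": 2, "Skill proficiency": 4}
print(flag_weak_factors(ratings))  # -> ['Availability of funding']
```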

Table 2. Brief Summary of 14 Steps and Four Temporal Phases Involved in Quality Implementation

Phase One:  Initial Considerations Regarding the Host Setting

Assessment Activities
1. Conduct a Needs and Resources Assessment
2. Assess the fit of the program with the organization
3. Conduct a Capacity/Readiness Assessment

Decisions about Adaptation
4. How Should Fidelity and Possible Adaptations Be Decided?

Capacity-Building Strategies
5. Obtain Explicit Buy-in from Critical Stakeholders
6. Build General/Organizational Capacity
7. Recruit Implementation Staff
8. Effective Pre-Innovation Staff Training

Phase Two:  Creating a Structure for Implementation

Structural Features for Implementation
9. Create Teams Responsible for Quality Implementation
10. Develop an Implementation Plan

Phase Three:  Ongoing Structure Once Implementation Begins

Ongoing Implementation Support Strategies
11. Technical Assistance/Coaching/Supervision
12. Monitoring On-going Implementation
13. Supportive Feedback System

Phase Four:  Improving Future Applications
14. Learning from Experience

The relative importance of each factor and how different factors may interact to influence implementation have yet to be clarified, but it is important to consider their possible relevance in each situation.  For example, some factors exist at the societal or community level, such as political pressures or policy mandates and the availability of funding; some are related to whether local practitioners perceive a need for the program and recognize its potential benefits; and others pertain to features of the organization conducting the program, such as its work climate, openness to change, and task-orientation.


Because the quality of implementation is so important to program outcomes, it is essential to learn what is necessary to achieve this level of implementation. There is now convergent evidence from implementation science about how this can be accomplished.  Several authors have independently developed conceptual models or frameworks regarding how implementation can be carried out effectively based on systematic research and practice in diverse areas such as health care, education, mental health prevention, treatment for adults and children, and management (e.g., Damschroder et al., 2009; Fixsen et al., 2005; Hall & Hord, 2006; Klein & Sorra, 1996; Spoth, Greenberg, Bierman, & Redmond, 2004).

Meyers, Durlak, and Wandersman (in press) synthesized this literature and found consensus regarding 14 steps that are related to quality implementation; they created the Quality Implementation Framework (QIF) to describe these steps.  The QIF, which is divided into a four-phase temporal sequence and contains information on the major goals that should be accomplished at each step, is presented in Table 2.

It is important to consider and effectively address each step in the implementation process.  For example, before implementation begins, it is important to assess such issues as how well the program fits the setting, whether staff hold realistic expectations about what can be achieved, whether there is genuine buy-in or acceptance for the new program, and how to train staff effectively for their new roles.  Once implementation begins, on-going technical assistance is needed to help staff implement with quality.  It is also essential to develop and maintain a good monitoring and feedback system during implementation (Steps 12 and 13 in Table 2).  This is because implementation often varies over time: sometimes quality drops and other times it increases.  Both types of changes have implications.  If implementation drops to too low a level after a good start, there is a need to intervene quickly through professional development activities to improve implementation.  Such a drop may also signal a need to re-examine whether commitment, support, and enthusiasm still exist for the new program, and what steps might be taken to rekindle the initial interest and support of the organization and its staff.

Increases in implementation quality have been noted in longer, more complex programs, in which it may take more than a year to achieve quality implementation.  Therefore, patience is required in estimating the true value of some programs.  Depending on how complicated and comprehensive a program is, it may take up to 3 years before quality implementation can be achieved (Goldstein, 2011).  Therefore, one cannot assume that the level of implementation displayed during the early stage of a program will be the same as that achieved at the end of the program. A good monitoring and feedback system is important so that practitioners receive positive feedback about the good job they are doing, and so that efforts to improve implementation can be made quickly if needed.
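
To make the monitoring and feedback idea (Steps 12 and 13) concrete, here is a minimal sketch, assuming a 0-1 fidelity score is collected at each monitoring cycle and a warning threshold below which quick intervention is warranted. The scale, the threshold, and the data are invented for illustration; none of them come from the brief.

```python
# Minimal sketch of Steps 12-13 (monitor ongoing implementation, provide
# supportive feedback). The 0-1 fidelity scale, the threshold, and the
# example scores are assumptions, not values from the brief.

WARNING_THRESHOLD = 0.75  # assumed minimum acceptable fidelity

def review_fidelity(scores):
    """Turn a history of fidelity scores into a feedback message."""
    latest = scores[-1]
    if latest < WARNING_THRESHOLD:
        return (f"Fidelity at {latest:.2f}: intervene quickly (e.g., "
                f"professional development; re-examine buy-in and support).")
    if len(scores) >= 2 and latest < scores[-2]:
        return (f"Fidelity slipping ({scores[-2]:.2f} -> {latest:.2f}): "
                f"watch the trend and offer coaching before it drops further.")
    return f"Fidelity at {latest:.2f}: give practitioners positive feedback."

# Hypothetical monitoring cycles for one program site
history = []
for score in (0.85, 0.80, 0.70):
    history.append(score)
    print(review_fidelity(history))
```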

As reflected by the Quality Implementation Framework, systematic research and practice in implementation science have indicated that quality implementation:

·         Is a systematic process of coordinated steps; quality implementation can be achieved with careful planning;

·         Has a temporal sequence; some things should be done before others; in fact, 10 of the 14 steps should be addressed before the program begins; and
·         Requires many different types of activities and skills that include assessment, negotiation, collaboration, planning, and critical self-reflection.

In sum, implementation takes time and effort and should not be rushed.  Attempts to short-change the process or omit important steps can undermine quality implementation.


The finding that at least 23 factors may affect implementation and that the implementation process involves 14 steps can seem overwhelming to those who want to conduct a new program.  However, it is important to keep two points in mind:

1.      There are many examples of well-implemented programs.  Success is possible.
2.      Implementation is a mutual responsibility shared by several groups (Wandersman et al., 2008). Solving the challenge of quality implementation requires the active collaboration of four major groups of stakeholders: researchers/program developers (or others who provide technical assistance), local practitioners, funders, and local administrators.

The chances for quality implementation are enhanced when multiple stakeholders work collaboratively and approach implementation in a careful, systematic fashion over time.  See Figure 1.  

Figure 1. Collaboration Among Multiple Stakeholders Leads to Quality Implementation


Adaptation refers to changes made in a program when it is implemented in a new setting. Whenever programs are conducted, there is the issue of the extent to which they should be delivered as originally developed, or adapted in some way.  This is a very important issue because, when others consider using a program, there is often a question in their minds that goes something like:  “Yes, I know that program X has been effective elsewhere, but our situation here seems different. If we change the original program so it is a better fit for our circumstances, will it still be successful?”  As the science of implementation has advanced, clarity regarding this issue has emerged.

There is now agreement in implementation science that whenever the core components of a program are known (i.e., the active ingredients of a program that are primarily associated with its effectiveness), these elements should be implemented without adaptation (see the accompanying ASPE Research Brief by Blase and Fixsen entitled Core Intervention Components: Identifying and Operationalizing What Makes Programs Work).  If all the core components are not administered, then the program either will not work or will not work as well as it could.  Decisions as to what constitutes core components are challenging because research has seldom isolated these components.  Although some program designers may identify core components based upon theory alone, these assumptions are not always correct and could lead to omitting an element that is, in fact, an active ingredient of the program.  Decisions regarding core components should be based upon empirical findings.

Beyond its core components, other aspects of the program can be modified to suit the setting or the population served, and this often offers possibilities for some adaptation to occur.  In other words, fidelity and adaptation are not necessarily mutually exclusive, either-or considerations, and programs can be a blend of both fidelity and adaptation.

There are many different aspects to developing a program for children or youth (e.g., home visitation, teen pregnancy prevention) that might be adapted.  For example, exercises or activities within a lesson may be modified to suit the cultural background of the participants as long as they fulfill the objective of the original lesson or the teaching point.  Other modifications might include changing the time at which the program is offered or providing repeat sessions to better fit the needs of the clients.  Depending on the circumstances, some of these elements can be adapted to fit the new setting, as long as the core components are delivered.       
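
One way to picture the fidelity/adaptation blend described above is as a simple check that a proposed local adaptation plan leaves core components untouched while permitting changes to surface features. This is a hypothetical sketch: the component and element names are invented and do not come from any real program.

```python
# Hypothetical sketch of fidelity vs. adaptation: core components are locked,
# while surface features may be adapted. All element names are invented.

CORE_COMPONENTS = {"skill-building lessons", "role-play practice"}
ADAPTABLE = {"session time", "cultural examples", "repeat sessions"}

def review_adaptation_plan(dropped, modified):
    """List problems with a proposed plan; an empty list means it is safe."""
    problems = [f"Core component removed: {c}" for c in dropped & CORE_COMPONENTS]
    problems += [f"Core component altered: {c}" for c in modified & CORE_COMPONENTS]
    problems += [f"Unknown element (consult the developer): {c}"
                 for c in (dropped | modified) - CORE_COMPONENTS - ADAPTABLE]
    return problems

# A plan that tweaks an adaptable element but also drops a core component
problems = review_adaptation_plan(dropped={"role-play practice"},
                                  modified={"cultural examples"})
print(problems or "Plan preserves all core components.")
```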
           
Decisions regarding adaptation should be made collaboratively by the original program designer, or others who know the theory and central operational features of the intervention, and those hosting the new program who know their setting, the target population, and the local culture.  Otherwise, ineffective or even harmful adaptations might be made.

Collaborative working relationships are crucial for making wise decisions regarding fidelity and adaptation (Durlak & DuPre, 2008).  Depending on each unique circumstance, some changes that do not compromise the core elements of the program can be made, but improving the organization’s ability to help its clients should always be of central importance.  In other words, an organization’s primary motive for its actions should be to improve its services by offering the most effective assistance to its clientele.  Extrinsic reasons for adapting programs, such as political pressure, administrative fiat, or grabbing available money, are not associated with quality implementation.  Similarly, changing a program merely to save time, effort, or money is not wise.  Under these conditions, the intended outcomes may be compromised because the program’s active ingredients are either omitted or not well-implemented (Damschroder et al., 2009; Mihalic et al., 2008).


The importance of quality implementation has been well-documented, but achieving quality is a complex and demanding process.  Nevertheless, some useful lessons have been learned in implementation science:

Implementation is rarely perfect.  Some slippage inevitably occurs when programs are conducted in new settings (Durlak & DuPre, 2008).  This need not be a major concern as long as the problems are recognized and dealt with and implementation quality remains high enough.  A variety of unanticipated implementation problems can arise, related to such things as changes in leadership and staff; sudden budget re-authorizations; conflicts with transportation, scheduling, and emergencies; and competing job pressures.  Fortunately, good judgment and guidance from implementation research and practice can help anticipate and deal with the challenges that might occur.  A good monitoring and feedback system can help identify when problems may be hindering quality implementation so that fixes can be made to improve implementation (e.g., DuFrene, Noell, Gilbertson, & Duhon, 2005; Greenwood, Tapia, Abbott, & Walton, 2003).  To achieve quality implementation, the process needs to be given sufficient time.  Also, public policy decisions should be based on evaluations of programs that have been implemented with quality; otherwise, the relative value and cost-effectiveness of alternative programs cannot be determined.

Practitioners vary in their performance when implementing new programs.  It is important to monitor each practitioner’s performance and offer additional professional development as needed.  People have different learning styles and learning curves; some can develop new skills quickly while others require more time and practice.  Some lose motivation over time and may need professional development to rekindle enthusiasm.  Others may simply not care about implementing the program and may need stronger incentives to carry out the program, or they may need to be replaced (Mihalic et al. 2008).

A pilot program is often a good idea. Because doing something new requires time and practice to achieve mastery, it may be a good idea to try a new program on a small pilot basis instead of launching into a large-scale project.  For example, the Teen Pregnancy Prevention Program, administered by the Office of Adolescent Health, allowed grantees the opportunity to use the first 12 months as a phased-in implementation period.  During this time, sites were encouraged to prepare for program implementation, including conducting a pilot (Margolis, 2011).  A pilot program can help an organization “work out the kinks” regarding implementation and plan more effectively for a later more extensive program (see Blase & Fixsen and Embry & Lipsey briefs). 

Don’t implement an evidence-based program on your own.  Advertisements demonstrating new products often carry the following admonition in various forms: “Professionals were used.  Do not try this at home.”  This caution also applies to the implementation of evidence-based programs.  One of the advantages of using an evidence-based program, compared to developing a new program, is that others have used it before and, in some cases, have developed strategies for overcoming obstacles and implementing the program effectively.  Drawing on the expertise of outside professional assistance and experience is a key ingredient in quality implementation and successful outcomes.  Evidence-based programs often come with developed training and technical assistance packages, fidelity guidelines, and monitoring processes.  Indeed, high quality implementation is the joint responsibility of multiple stakeholders, who typically include funders/policy makers, program developers/researchers, local practitioners, and local administrators.

There may be rare cases in which a brief and simple program can be learned by reading a manual or participating in a short workshop or on-line training session, but these are rare exceptions to the rule that outside assistance is needed to achieve quality implementation.  Moreover, it is wishful thinking that a few simple “magic bullets” will achieve important social goals.

Practitioners can find assistance in selecting and implementing evidence-based programs in various ways.  For example, there may be a national replication office for a specific program. Other organizations can provide materials, training, and guidance for several models and provide information about consultants who are willing to provide professional development services for various programs.  Some examples of these resources are provided in the Appendix of this report.

It is possible to adapt an evidence-based program to fit local circumstances and needs as long as the program’s core components, established by theory or preferably through empirical research, are retained and not modified.

In sum, implementation is important for all child and youth programs, and increasing the quality of implementation increases the chances that the program will yield its intended outcomes.  Many factors can affect quality of implementation, and there are multiple steps in the implementation process, so time and effort are essential to achieving quality program implementation.  However, success is possible, and resources are available to help select and implement evidence-based programs effectively.

References

Blase, K. A., & Fixsen, D. L. (2013). Core intervention components: Identifying and operationalizing “what works”. Washington, DC: U.S. Department of Health and Human Services.

Damschroder, L. J., Aron, D. C., Keith, R. E., Kirsh, S. R., Alexander, J. A., & Lowery, J. C. (2009). Fostering implementation of health services research findings into practice: A consolidated framework for advancing implementation science. Implementation Science, 4, 50.

DuBois, D. L., Holloway, B. E., Valentine, J. C., & Cooper, H. (2002). Effectiveness of mentoring programs for youth: A meta-analytic review. American Journal of Community Psychology, 30, 157-198.

DuFrene, B. A., Noell, G. H., Gilbertson, D. N., & Duhon, G. J. (2005). Monitoring implementation of reciprocal peer tutoring: Identifying and intervening with students who do not maintain accurate implementation. School Psychology Review, 34, 74-86.

Durlak, J. A., & DuPre, E. P. (2008). Implementation matters: A review of research on the influence of implementation on program outcomes and the factors affecting implementation. American Journal of Community Psychology, 41, 327-350.

Durlak, J. A., Weissberg, R. P., Dymnicki, A. B., Taylor, R. D., & Schellinger, K. B. (2011). The impact of enhancing students’ social and emotional learning: A meta-analysis of school-based universal interventions. Child Development, 82, 405-433.

Embry, D.D., & Lipsey, M. (forthcoming). To boldly go, where none have gone before:  Using a practical toolkit for the development, adaptation, and innovation for solving new human behavioral problems. Washington, DC: U.S. Department of Health and Human Services.

Fixsen, D. L., Naoom, S. F., Blase, K. A., Friedman, R. M., & Wallace, F. (2005). Implementation research: A synthesis of the literature. Tampa, FL: University of South Florida, Louis de la Parte Florida Mental Health Institute, The National Implementation Research Network (FMHI Publication #231). Retrieved November 1, 2006, from http://nirn.fmhi.usf.edu/resources/publications/Monograph/pdf/monograph_full.pdf

Goldstein, N. (2011, April). A federal perspective on scale-up. Presented at the Emphasizing Evidence-Based Programs for Children and Youth Forum, Washington, DC.

Greenhalgh, T., Robert, G., Macfarlane, F., Bate, P., Kyriakidou, O., & Peacock, R. (2005). Diffusion of innovations in health service organizations: A systematic literature review. Oxford: Blackwell.

Greenwood, C. R., Tapia, Y., Abbott, M., & Walton, C. (2003). A building-based case study of evidence-based literacy practices: Implementation, reading behavior, and growth in reading fluency, K-4. The Journal of Special Education, 37, 95–110.

Hall, G. E., & Hord, S. M. (2006). Implementing change: Patterns, principles and potholes (2nd ed.). Boston, MA: Allyn and Bacon.

Klein, K. J., & Sorra, J. S. (1996). The challenge of innovation implementation. Academy of Management Review, 21, 1055–1080.

Margolis, A. (2011, April). Replicating evidence-based teenage pregnancy prevention programs: A case study. Presented at the Emphasizing Evidence-Based Programs for Children and Youth Forum, Washington, DC.

Meyers, D. C., Durlak, J. A., & Wandersman, A. (in press). The Quality Implementation Framework: A synthesis of critical steps in the implementation process.

Smith, J. D., Schneider, B. H., Smith, P. K., & Ananiadou, K. (2004). The effectiveness of whole-school antibullying programs: A synthesis of evaluation research. School Psychology Review, 33, 547-560.

Spoth, R., Greenberg, M., Bierman, K., & Redmond, C. (2004). PROSPER community-university partnership model for public education systems: Capacity-building for evidence-based, competence-building prevention. Prevention Science, 5, 31–39.

Stith, S., Pruitt, I., Dees, J., Fronce, M., Green, N., Som, A., et al. (2006). Implementing community-based prevention programming: A review of the literature. Journal of Primary Prevention, 27, 599-617.

Wandersman, A., Duffy, J., Flaspohler, P., Noonan, R., Lubell, K., Stillman, L., Blachman, M., Dunville, R., & Saul, J. (2008). Bridging the gap between prevention research and practice: The Interactive Systems Framework for dissemination and implementation. American Journal of Community Psychology, 41, 171–181.

Wilson, S. J., Lipsey, M. W., & Derzon, J. H. (2003). The effects of school-based intervention programs on aggressive behavior: A meta-analysis. Journal of Consulting and Clinical Psychology, 71, 136-149.


Appendix

1.      Collaborative for Academic, Social, and Emotional Learning (CASEL). www.casel.org    CASEL’s main goal is to foster the implementation of evidence-based programming to enhance academic, social, and emotional learning in preschools through high schools. In doing so, CASEL collaborates with program developers and consultants who offer professional development services for schools interested in implementing effective school programs. CASEL also has useful toolkits to help districts and schools select evidence-based programs and plan for implementation.

2.      Safe and Supportive Schools Technical Assistance Center. http://safesupportiveschools.ed.gov. This agency helps schools select and conduct evidence-based programs; it provides general assistance and puts schools into contact with various groups that support different programs.  The S3 TA Center’s Website (http://safesupportiveschools.ed.gov) includes information about the Center’s training and technical assistance, products and tools, and latest research findings, including links to searchable lists of and information about evidence-based programs and programmatic interventions.  In particular, it includes a page on programmatic interventions at http://safesupportiveschools.ed.gov/index.php?id=32

3.      FindYouthInfo (http://www.findyouthinfo.gov) was created by the Interagency Working Group on Youth Programs (IWGYP), which is composed of representatives from twelve Federal Departments and five Federal agencies that support programs and services focusing on youth. The IWGYP promotes the goal of positive, healthy outcomes for youth by identifying and disseminating promising and effective strategies. Its website provides interactive tools and other resources to help youth-serving organizations and community partnerships plan, implement, and participate in effective programs for youth.

4.      National Center for Mental Health Promotion and Youth Violence Prevention Website.  (http://www.promoteprevent.org).  The National Center for Mental Health Promotion and Youth Violence Prevention (National Center) is another resource for states, districts, and schools interested in researching and implementing evidence-based programs. The National Center’s overall goal is to provide technical assistance (TA) and training to school districts and communities that receive grants from the U.S. Departments of Education and Justice and the Substance Abuse and Mental Health Services Administration (SAMHSA) in the U.S. Department of Health and Human Services.  The National Center offers an array of products and services that enable grantees to plan, implement, evaluate, and sustain activities that foster resilience, promote mental health, and prevent youth violence and mental and behavioral disorders.

5.      Evidence-Based Prevention and Intervention Support Center (EPIS Center).  (http://www.episcenter.psu.edu/)  The EPIS Center is a project of the Prevention Research Center, within the College of Health and Human Development at Penn State University.  It provides support for the implementation of 11 evidence-based programs with attention to providing training and technical assistance, developing resources, helping programs advocate in communities, and conducting research.

Source: http://aspe.hhs.gov/hsp/13/KeyIssuesforChildrenYouth/ImportanceofQuality/rb_QualityImp.cfm
