Coaching for Communities
*Note: The National Mentoring Resource Center makes these “Insights for Mentoring Practitioners” available for each program or practice reviewed by the National Mentoring Resource Center Research Board. Their purpose is to give mentoring professionals additional information and understanding that can help them apply reviews to their own programs. You can read the full review on the National Mentoring Resource Center website.*
In considering the key takeaways from the research on this program from the UK that other mentoring programs here in the United States can apply to their work, it’s useful to reflect on the features and practices that might have influenced its rating as “Promising” (that is, a program that shows some evidence that it achieves justice-related and other goals when implemented with fidelity).
1. The value of process evaluation as a precursor to outcome evaluation.
Perhaps the most interesting wrinkle of the evaluation of Coaching for Communities (CfC) is the two distinct phases of the work, with one having a major influence on the other. The team conducting the evaluation realized early in their observation of the program and the planning for the evaluation that the program was being implemented inconsistently across multiple sites and that this would make it challenging to determine how well the program was working. These inconsistencies ranged from how youth (and which youth) were referred to the program, to a burdensome and lengthy assessment process that left youth and parents feeling discouraged about the program, to the absence of both a theory of change that would inform the content of the mentoring activities and a clear program manual to guide the actions of staff and volunteer mentors.
So rather than dive directly into an outcome evaluation that would likely yield little information of value, the research team spent considerable time working with program staff to address these gaps and, in their words, “tighten up” the program before moving on with any kind of evaluation of outcomes. Other programs are well advised to build this type of “tune-up” into any evaluation planning. Not only can it address gaps in service delivery that might influence eventual outcomes, but it can also put processes in place for proper data collection and true “apples-to-apples” comparisons across multiple sites of the same program. It also ensures that the program is being studied in as close to an idealized version as is possible at the time, an important factor when trying to answer questions about whether the program, as conceived, is a worthy investment.
2. Sometimes the curriculum matters as much as the personal support of a mentor.
Coaching for Communities offers an interesting model for reaching young people: a five-day residential stay during which the youth put in a lot of time learning about themselves and building prosocial skills, followed by 9 months of follow-up with a community mentor and a monthly meeting where the pair also meets with program staff to engage in more teaching and learning and work toward youth-identified goals. What practitioners in other programs may want to replicate, or at least mirror, in their own work with youth exhibiting problem behavior is the content of the “coursework” the youth engaged in during the residential phase and beyond. While the evaluation report doesn’t go into too much detail about the “distinction-based learning” that happens in the program, it does describe some of the topics covered: a youth’s relationship to rules, the importance of giving and keeping one’s word, learning from past experience, distinguishing fact from interpretation, the power of past experiences on actions taken in the present, handling breakdowns and crisis moments better, and, most appropriately given the nature of the program, the role and value of a coach. These topics seem designed to get youth thinking about their own decision-making processes, how they respond to stressful situations, how they can better navigate and understand relationships with others, and the critical step of understanding how their own pasts and trauma can influence their reactions and motivations today. These all seem like critical things to address when working with young people who have already had some involvement in delinquency and who could easily repeat their mistakes without a better understanding of themselves and an examination of what leads to their occasional antisocial behaviors.
The program supplements this residential phase teaching with ongoing curriculum-driven content at the monthly follow-up meetings youth attend with their mentors, where topics such as personal aspirations, teamwork, self-expression, and relationship building are covered.
The program offers a pretty high-touch approach to the mentor’s support, with the mentor checking in a minimum of three times a week, either in person or over phone/text/email. That’s a lot of contact. Unfortunately, neither the volume nor the quality of interactions with the mentor was predictive of who appeared to benefit from the program or how much. However, the number of trainer-led monthly meetings the young person attended did show an association with improvements in a number of key outcome areas. This implies that the parts of the program in which youth were exposed to the curriculum and the guided topics did play a role in changing their attitudes and behaviors. Many practitioners firmly believe that in mentoring the relationship itself is the intervention. And while that may be true to a large degree, it’s worth remembering that youth who are experiencing negative circumstances and challenges in navigating them successfully will probably need more than a friend. They will likely need opportunities to learn, grow, and change their thinking in the ways this curriculum was designed to facilitate.
3. Using evaluation to think about the optimal client.
The evaluation report for this program has a great discussion section where the authors try to make sense of the findings. Why did the program work for some youth but not others? And why did some outcomes shine through while others showed no impact from the services provided? Ultimately, they conclude that the program may work best for youth who are early in their involvement in juvenile justice, or as they put it in the report, “are displaying high levels of anti-social behaviour at home, in school or in the community but who have not yet been excluded or arrested and have not yet developed a persistent use of drugs, alcohol or other substances.” They go on to conclude that “targeting the programme at young people whose anti-social behaviour is the product of low self-esteem, poor affect and low emotional well-being (three areas where CfC makes a difference) would seem advisable.”
These types of evaluations often find subgroup differences indicating that the program works better for some youth than others. Programs are then faced with a choice: strengthen aspects of the program so that it reaches those who are not benefitting, or admit that the program is really best designed for certain youth and prioritize their recruitment and involvement moving forward. And while the authors of the study recommend doing the latter here so that the program can have maximum impact on at least some youth, they also recognize that this approach may not be possible given the funding environment in which the program is operating. They note that the government agencies that fund this and similar programs tend to “expect provider agencies to be flexible, altering the target group to suit local needs, and adapting the programme to fit with broader children’s services provision.” They further note that this flexibility, while appealing on paper, will fundamentally “alter the underlying logic model and greatly reduce the potential impact on the outcomes for children and young people.”
We see this tension play out in much of the mentoring world in the United States. We have many programs that serve a wide variety of youth with similarly varied levels and types of needs, and these programs often have trouble demonstrating their impact in evaluations, seemingly at least in part because they are trying to be all things to all people. More focused programs working with specific youth populations on a narrower range of issues often show impressive results, but raise questions about scalability and the efficient or equitable distribution of scarce resources. It’s interesting in the case of Coaching for Communities that the authors note that the success of the program in this evaluation might deepen the investment in the program, but also cause it to be replicated somewhat carelessly, without fidelity to what was proved to “work” here.
This is a conundrum that all practitioners should think about when heading into an evaluation. If we get good results only for some youth, how will we handle that? Will our funders understand that nuance? How can we then expand or change responsibly in the wake of such findings? The study authors end with a salient point on this that both practitioners and funders in mentoring would be wise to heed:
“Many providers are, like Youth at Risk, small charitable organizations whose survival depends on decisions made by more powerful government authorities. Short-term bravery to find out if their interventions are effective is possible when funded by philanthropy. But long-term and sustained change requires a change in approach by the people who decide on how to use programmes like CfC.”
For more information on research-informed program practices and tools for implementation, be sure to consult the Elements of Effective Practice for Mentoring™ and the "Resources for Mentoring Programs" section of the National Mentoring Resource Center site.