Insights for Practitioners


*Note: The National Mentoring Resource Center makes these “Insights for Mentoring Practitioners” available for each program or practice reviewed by the National Mentoring Resource Center Research Board. Their purpose is to give mentoring professionals additional information and understanding that can help them apply reviews to their own programs. You can read the full review on the Crime Solutions website.

In considering the key takeaways from the research on this program that other mentoring programs can apply to their work, it’s useful to reflect on the features and practices that might have influenced its rating as “Promising” (that is, a program that has strong evidence that it achieved several of its justice-related goals).

1. If you’re going to do an evaluation, make sure you describe the program well when writing up the results.

This may seem like a small thing to praise this program and its evaluators for, but it was refreshing to see a mentoring program described in real detail as part of an outcome evaluation. Most published studies like this will describe the program at some level, usually focusing on the demographics of the youth served and their mentors, as well as a general explanation of what the program is trying to accomplish. But this article really sets the standard in terms of the quality and depth of information provided about how Eye to Eye selects schools, how it selects mentee participants within those schools, the curriculum it uses and the activities matches engage in, and even concrete details about how mentors are identified and screened. Far too often, these Crime Solutions reviews and subsequent profiles provide plenty of information about the evaluation, but practitioners are left wondering how the program achieved its results or how they might proceed if they wanted to do something similar. By providing such rich descriptive information in the study write-up itself, this program has made it much easier for other practitioners to understand the practices that might best support youth with learning disabilities (LD) and attention-deficit hyperactivity disorder (ADHD).

2. Credible messengers once again prove to be valuable assets to others.

One of the factors that seems likely to be involved in the success of the Eye to Eye program is the use of older peers who themselves have LD/ADHD diagnoses as the mentors. Using these youth, specifically, in the mentor role might be crucial in helping mentees feel understood, listened to, and able to visualize themselves as thriving adolescents or young adults. Mentors without those diagnoses may have also been effective in working with these youth—and the evaluation here did not test this or compare these diagnosed mentors to other types of individuals. But we have seen credible messengers used many times before in the mentoring field, and there seems to be some real validity to using mentors who know exactly what it’s like to be in their mentee’s shoes. (See the Insights provided for the Arches and My Life programs for examples of other programs using mentors who had faced similar challenges to the youth they are serving.) It’s not surprising that measures of relationship quality for these matches were quite high and correlated with stronger outcomes in the direction of lowered depressive symptoms, increased self-esteem, and improved personal relationships. Sometimes we learn the most from those who have travelled the same path before us. What’s especially promising about the Eye to Eye approach is that the mentors also appear likely to have gotten a lot out of the experience, taking an ownership role in how the model was implemented in their school and building leadership and project management skills. Even though the evaluation didn’t explore this aspect in detail, one can assume that serving as leaders and mentors like this may have also helped the mentors feel more positive about their own LD/ADHD circumstances.

3. When designing a curriculum, get the best help possible from experts.

One of the great debates in mentoring is whether the impact of mentoring comes from the strong relationship (ideally) formed between mentor and mentee or whether those outcomes are the result of the activities they do together. And while the answer may well be that the best impacts often come from both of those things in tandem, the reality is that for a program like Eye to Eye that is working with youth with serious disabilities and conditions, having a good curriculum that’s designed to facilitate specific learning moments and interactions tailored to those youths’ needs may be critical.

But programs often wonder how to develop a curriculum that is the right fit for what they want to accomplish with youth. Eye to Eye offers an excellent example of how other programs might want to approach this. They started by identifying core socio-emotional objectives based on a longitudinal study of youth with LD/ADHD that had previously identified success attributes that helped those youth thrive. With those objectives in hand, the program engaged a number of groups in designing relevant activities that would speak to those objectives: a team of educators with LD/ADHD themselves, a focus group of young adults, and, perhaps most crucially, faculty and graduate/postdoctoral students at Brown, Harvard, and Columbia Universities. That’s a lot of expertise and many different viewpoints contributing to the formation of these activities. In the end, the program had developed an arts-based curriculum that uses fun and creative hands-on projects to get mentors and mentees talking about strengths and challenges related to their LD/ADHD. This is an excellent example of how researchers, subject matter experts, and client voice can all be harnessed to produce something that is custom-tailored to the youth who are the focus of the program. And, because Eye to Eye is so transparent about their model (see point #1 above), they even make a national office email available in the study write-up for those who want to learn more about it or adapt it for their programs.

4. If you are evaluating your program and want a comparison group, make sure you know if those youth are being mentored somewhere else.

One of the nice things about this evaluation is that they compared the outcomes of mentored youth in the program with two different comparison groups: unmentored youth with LD/ADHD at similar schools and youth at similar schools who did not have LD/ADHD. A small but important detail in setting those groups up is that they excluded youth from both groups who indicated they were receiving mentoring through some other program. In this case, they didn’t want to compare Eye to Eye against other programs or service providers, but against a hypothetical situation that isolated the influence of Eye to Eye mentors compared to very similar youth who didn’t have the services of the program.

Now, this study was not a true random assignment design, but the evaluators did purposefully restrict who got into those comparison groups. So while this may not have been a pure experiment, the program also seems to have avoided one of the big reasons that many mentoring program evaluations struggle to show results: the comparison kids going and finding mentoring somewhere else. While youth in the comparison schools were not restricted from seeking out and receiving other services, it’s likely that few did, given the somewhat novel intervention offered here, which the authors note had few comparisons in the literature.

When significant numbers of the youth who serve as the counterfactual to the work of the program go and get similar services somewhere else, it can wash away differences between the two groups at a meaningful level. You are no longer comparing mentored and unmentored youth; you are comparing mentored and differently mentored youth. And given the tight thresholds used in the statistical analyses of these types of evaluations, that can often make the difference between having several positive, statistically significant findings and having results that look like the program achieved nothing.

Another good example of this phenomenon can be found in the Insights we wrote for the SOURCE program. That program emphasizes working with youth to apply to college the following year and does a lot of work to facilitate that process and get parents on board. The program did seem to do an effective job of getting youth to apply, which was good news. However, their evaluation also found that almost 94% of the comparison group youth applied to college as well, often with the help of other services and programmatic support, either through their school or from other similar nonprofits. The end result is that it looked like the program was no better than just “business as usual.”

Now, with a goal like college planning and application in mind, it may be a good thing that such a high percentage of the comparison group did apply—after all, this is their time to do it, and it would be impossible to tell a family to defer that decision for a year just for an evaluation trying to test the results of one program. But time and time again we see mentoring program evaluations that are undone by comparison groups of kids getting mentoring from other programmatic sources (sometimes in spite of promising not to). So this is a nice caution to mentoring programs and evaluators to set up comparison groups carefully. In the case of Eye to Eye, they removed anyone already being mentored from those groups and seemed to avoid comparison youth getting similar support elsewhere, making it easier to show clear impacts that are more easily attributable to the work of the program. Of course, to make sure eventual comparisons are truly “apples to apples,” the program group needs to be similarly restricted to youth without pre-existing mentoring relationships, something that may be useful to consider for other reasons as well, such as prioritizing access to limited program slots or mentors.

For more information on research-informed program practices and tools for implementation, be sure to consult the Elements of Effective Practice for Mentoring™ and the "Resources" section of the National Mentoring Resource Center site.


In considering the key takeaways from the research on this program that other mentoring programs can apply to their work, it’s useful to reflect on the features and practices that might have influenced its rating as “No effects” (that is, a program that, based on review of available evaluation research, does not show evidence of effectiveness for influencing juvenile justice or related outcomes).

Note: While Crime Solutions does consider the two variations on this program model to be separate interventions that were reviewed independently, here we will discuss both variations of the One Summer Plus model: the one that provides youth with employment opportunities and mentorship and the other that reduces those work hours (by 40%) to allow for youth to participate in workshops and learning opportunities related to social-emotional development.

1. If you are going to do a large study, why not test multiple variations?

One of the more challenging aspects of mentoring programs for practitioners to articulate is the set of causal mechanisms that allow mentoring to have an impact on a young person. While it may seem intuitive that mentors positively influence young people by offering advice, helping them expand their connections and future plans, and serving as role models, many programs cannot fill in the blanks between those activities and the ultimate outcomes of the program—reducing crime rates among disadvantaged youth, as in the case of One Summer Plus. It can be challenging to explain the causal pathway that allows something like the dispensing of advice to change a youth’s attitudes deeply enough that their behavior changes over time in meaningful ways. It can be similarly difficult to describe how that pathway operates within the context of other factors at play in the youth’s home, educational, and social life. Mentoring takes place in such subtle ways and in such a complex stew of life that even seasoned practitioners and scholars can know a program “works” but not know exactly how or why.

One Summer Plus’s evaluation design offers a potential way to try and answer these questions. The authors in the main journal article cited in this review note that there are many potential factors that might lead to a program like this reducing crime (or at least arrest rates, which is the main outcome reported on in these studies): having a job simply takes youth off the streets for periods of time in which they might commit crime, having the income of a job may reduce financial incentives for certain crimes, and the positive experience of employment might lead to more positive outlooks and a recommitment to school and education. In the case of One Summer Plus, the program is premised on the idea that adding a formal mentor to that job might also lead to personal growth and a move away from criminal behaviors.

In this study, the program tested not only the version of the program where youth received a 25-hour a week job and a dedicated mentor, but also a version where a portion of those job hours were used on classes designed to improve social-emotional competencies through the use of cognitive-behavioral principles. The idea here was to test and see whether the jobs/mentors alone would lead to significant crime reduction, or whether those outcomes might be strengthened by even more emphasis on “soft skills” that might help youth not only at the jobsite, but also in other situations, including those where they might commit crime.

Unfortunately, while the evidence suggests that the program overall did slightly reduce arrest rates for participants compared to the control youth (around 4 fewer arrests per 100 youth), neither version of the program really stood out from the other in terms of effectiveness. In fact, it appears that the only type of arrests the program impacted at all were violent crime arrests, which were reduced slightly more for those in the SEL version of the program (compared to the control group), but at a level that was barely statistically significant. All other types of arrests remained essentially the same for both program types and the control group.

Other programs may benefit from trying this type of multiple model design when evaluating their services. For example, is a school-based mentoring program effective because the mentor spends time teaching academic skills, or because a mentor simply showing up and having some fun with the student breaks up a challenging school day and helps the youth feel more positive about being at school overall? Asking some mentors to focus more on academics while encouraging others to focus on fun and play might show a difference in impact. Comparing those groups of mentors to another variation that combines mentoring with dedicated tutors might also highlight different ways such a program could work to maximize both educational progress and other personal growth. In the case of One Summer Plus, this multi-model design helped them understand some basic things about how their services might facilitate change—they now know that the SEL component might not be the main driver of program outcomes, which allows them to focus on other aspects of the program that might be most influential for those gains on violent arrests. The evaluators also collected other data that helped rule out other theorized pathways of change, noting, for example, that school records did not indicate that the program changed much around the youth’s academic engagement. And those violent arrests kept dropping well after the program ended, so the effect can’t simply be the direct influence of mentors or the jobs eating up time that might otherwise have been spent getting into trouble.

So, what was the aspect of the program that reduced violent arrests? The authors conclude that the exact causal mechanisms at play here are still not fully explained, but they do wind up speculating about a few things, including a return to the topic of SEL.

2. There may be more than one way to get at SEL growth.

While it’s not surprising that an intervention designed to give youth from disadvantaged communities a job over the summer might have reduced youth violence during that time period, what is surprising, as noted above, is that the reduction in violence not only extended beyond the employment period but actually seemed to grow stronger over time. Initially, the similarity in outcomes between the “jobs” and “jobs+SEL” versions seemed to indicate that SEL growth was not a contributing factor. But what if that’s not the case? What if there are some sneaky ways of growing SEL competencies in youth that perhaps evened out these findings?

One of the big trends in recent years is to infuse the curricula and activities from other evidence-based educational and mental health interventions into mentoring programs. We have written about this practice here at the NMRC, and, viewed broadly, it’s a mixed bag in terms of success. In some cases, doing formal SEL, mindset, or other curriculum-based work in the context of a mentoring relationship seems to produce stronger results than a purely “friendship”-based approach might have. But the research also indicates that mentors and youth often struggle to fit these “add-on” activities into the work they are doing together and that mentors can find it challenging to learn and implement a whole new set of skills in addition to just building a positive relationship. Sometimes that “add-on” programming can even feel like a burden or distract from the core work the mentor and mentee are doing together. In the case of One Summer Plus, the program decided to see if a staff-led SEL curriculum could “boost” the outcomes of the job/mentor combination that the program usually provided. So around half of the youth in the program took 10 hours a week out of their work time to focus on SEL skill development. It sounds good on paper, but as noted above, it produced essentially no differences in crime reduction between the two groups of program participants.

The authors of the paper reach an interesting, although untested, conclusion about why this is after eliminating several other plausible explanations: the work with job mentors in the non-SEL cohort built SEL skills and competencies effectively too, giving youth who remained at the job site a fairly equivalent boost in their SEL skills. The authors explain that having a job where youth spent 25 hours a week problem solving, managing conflicts, organizing their time, communicating effectively with others, and collaborating in a team environment, all with the help of a mentor, may have built SEL skills as much as the dedicated curriculum did. Although this was untested in the evaluation, it seems more plausible than the explanation that the SEL curriculum and its delivery were wholly ineffective, considering that the program was using a manualized set of materials and other mentoring efforts utilizing cognitive-behavioral principles have shown effectiveness.

This has implications for how mentoring programs approach things like SEL skill development. The natural inclination of practitioners is to run out and find a proven SEL curriculum and then ask mentors to implement that, or ask youth to take time away from (or in addition to) their mentoring time to work on the SEL skills too. But the case of One Summer Plus hints that perhaps you can do more of that SEL development within the context of normal program activities than one might think. This may be especially true of programs that provide jobs or other similarly challenging contexts and activities to mentees. By giving the young person a job or other challenges that will test them in new and unexpected ways, mentoring programs may provide valuable on-the-fly learning environments that, with the help of mentors, will allow youth to build SEL skills and apply them in the real world. Mentors can guide and step in if the youth gets in over their head with a challenge, but there is something to be said for developing these skills organically in the real-world settings where they will need to be applied, rather than doing it in a separate, formal learning environment where skills are taught using a manualized curriculum in a vacuum. That may be what happened with One Summer Plus. After all, SEL skills are the skills of human interaction, and there is no better place to build and refine those than a fast-paced work environment—especially with a caring, supportive mentor there as a guide and a backstop if problems arise.

So, the next time you are thinking about how to do focused work on some topic within your program, especially universal skills like those focused on in SEL, remember that the activities matches are already doing might already be a space where that growth is happening, or could happen with just a bit more intentionality. That may be easier than, and perhaps as effective as, grafting some other regimented intervention onto your program. But, as One Summer Plus did, the key is to test whether the idea worked or not.

3. Money matters, especially to those who don’t have enough.

The other possible explanation offered by the evaluators of One Summer Plus for how the model influenced arrest rates really sticks out in terms of its blunt practicality: these neighborhoods were so impoverished that even the income from this part-time, minimum wage job was plausibly enough of a shot in the financial arm to relieve some of the desperation and stress that these youth and families were feeling. The authors speculate that:

“The program provides a relatively large income shock, averaging $1,400 in neighborhoods where one third of households are below the poverty line and median income is about $35,000. Additional income could change criminal behavior directly or increase parental supervision by reducing how much parents need to work away from home.”

It’s troubling to think that an amount of money that small would have that profound an effect on something like criminal behavior and violence in a community. And it’s worth noting that while violent crime arrests, such as for assaults, appeared to be reduced, the rates of other types of crimes that might be financially motivated were essentially unchanged by the program. But it is not outside the realm of possibility that a little extra money might relieve a lot of stress on a family, allow for some other enrichment opportunities for youth, and change things like hopefulness and goal setting in ways that might have an effect on criminal behavior. There is a growing movement among economists for ideas like Universal Basic Income (UBI) that would provide people in a community with some relatively minor supplemental income as a way of meeting basic needs, reducing stress and anxiety, allowing for more prosocial engagement opportunities, and generally keeping the peace. The authors of the study do note that such infusions of cash can also have negative consequences, such as increasing the ability to buy drugs or alcohol, which might increase criminality. But it does seem quite possible that the money earned from these jobs helped these youth and their families not only over that summer but, when combined with the work experience gained, may have also led to future employment and earnings stability based on a resumé with some stronger work history and references on it.

Ultimately, the authors conclude that the growth in SEL skills, combined with the direct and future earnings of the youth, produced the apparent impact on violent crime arrests. But it’s remarkable to think about just how much that small amount of money may be able to change the lives of those in poverty. The idea of providing income supports is intriguing enough to have been added to the platforms of many political candidates in recent years, including several in the 2020 Presidential Campaign. Only time and additional research can answer questions about whether things like UBI are worthy tools to employ alongside or in combination with other interventions. But it’s also worth noting that the program spent more money administering the program per youth ($1,600) than it spent on the wages youth earned through their work. Given the discussion here, it would be nice to see even more of those funds make their way to the youth and families who so very clearly need them, rather than supporting administrative staff who most likely aren’t in those challenging circumstances.

Not every mentoring program is in a position to give youth a job that changes the financial fortunes for a family. But most are in the position to provide some help in supporting a family to meet basic needs, either by referrals to other organizations or through other opportunities. Remember, sometimes the best thing your mentoring program might do for someone to improve their circumstances isn’t the mentoring.

4. Think carefully about whether self-report outcomes or official records are the best way to express the benefits of your program.

As one last little side note about this study, the authors make several interesting points about the tension between tracking outcomes via self-reports from program participants versus examining official records—in this case, arrest records and school data. The NMRC Research Board recently added guidance around working with both of these types of records to its Measurement Guidance Toolkit because, as the authors of the One Summer Plus study note, records offer some real advantages. Most importantly, they avoid some of the biases that can arise when asking youth to report on their own behaviors, especially negative or illegal behaviors. In the case of One Summer Plus, youth may be reluctant to accurately report their violent or criminal behavior because they don’t trust that the program will keep that information confidential. They may also be worried about losing their job, and the potential future connections it provides, if they admit to wrongdoing. They may also have really enjoyed their time in the program or want to see the program succeed, and may downplay their true behavior to avoid harming the program itself. Importantly, none of these dynamics may apply to those in the control group, thus potentially leading to a finding of more arrests in the control group than in the intervention group that is due to reporting biases rather than participation in the intervention per se. For all these reasons, One Summer Plus decided to examine official records instead.

But the authors are quick to note that records have their flaws too. Records of all types can have inconsistencies, errors, and missing information, no matter how diligently they are collected and recorded. In the case of criminal records, the authors note that only about half of all violent crimes are even reported and, of those, only half result in an arrest. So, if arrest records are your source of outcome data, it might be the case that those records are capturing only a quarter of the actual violent criminal behavior that is happening in a community. The only saving grace in this case is that those records for both treatment and control groups would be similarly depressed, meaning a difference between the two groups as a result of the program would still be valid at some level. But neither option is ideal.

Programs should give real thought to which types of measures—self-report or official records—do the best job of capturing the desired goals of the program. Ideally, an evaluation would look at both, but that can be time consuming and expensive, and contradictory results can leave stakeholders confused as to what was actually achieved. (The other thing to note here is that the evaluation also broke down arrests by type rather than aggregating all arrests, which allowed them to detect the larger impact on violent crime specifically—without separating out those arrest types, it may have seemed like the program achieved almost nothing. Ideally, these types of breakdowns are specified at the start of the study so as to avoid “massaging” the data, even if unintentionally, after the fact in ways that favor configurations indicating positive program effects.)

The NMRC encourages practitioners and program evaluators to look at the new guidance in the Measurement Guidance Toolkit on using records data, as these sources may offer more reliable and nuanced ways of thinking about outcomes. Information on truancy records, disciplinary referrals, and grades, as well as juvenile offending records, is covered in these new additions.



In considering the key takeaways from the research on this program that other mentoring programs can apply to their work, it’s useful to reflect on the features and practices that might have influenced its rating as “No effects” (that is, a program that has strong evidence that it did not achieve justice-related goals).

1. Before one can evaluate, one has to clarify how to operate.

One of the interesting aspects of this evaluation is how much time and energy was spent developing an operations manual for the program that more tightly specified the intervention and programming. One doesn’t often think about manual development as a part of program evaluation, but the reality is that it can be impossible to measure the impact of something if that something is ill-defined and being done in a variety of ways. So it was imperative to the outcome evaluation that these mentoring groups be led and managed in similar ways across schools and even within schools. This reduced variability in how the program was being implemented and ensured that the evaluation was looking at one core intervention, albeit with some flexibility.

Taking the time to develop and share the manual and implementation materials also had the side benefit of giving the program some needed infrastructure that can carry the work forward even if various program leaders and key stakeholders leave. The development of a policy and procedure manual that governs how the program is implemented and clearly spells out policy choices is a critical aspect of program sustainability. It can also support program replication and dissemination. In fact, we applaud Project Arrive for making their manualized and standardized materials available to other practitioners on a publicly accessible website. As noted in the evaluation, this site is visited by as many as 1,300 visitors a month from all over the world—many of whom are using the materials to develop their own versions of the program or at least incorporating key ideas. That counts as a major resource for the group mentoring field. And none of it would have been possible if they hadn’t taken the time to put all those materials online.

Programs that haven’t taken the time to fully develop a policy and procedure manual can always do so using the template provided in the Resources section of the NMRC site. Between that template and the Elements of Effective Practice for Mentoring that Project Arrive used as a touchstone, practitioners have everything they need to codify and tighten the implementation of their program in a formal manual—regardless of whether an evaluation is imminent.

2. Group size and composition might really matter in these programs.

There are a few hints in the evaluation that group size was particularly important in relation to results. Smaller groups tended to rate both their relationships with their mentors and their group dynamics more favorably than larger groups, and those two results were correlated with positive outcomes on academics and resiliency in the study. Larger groups might be harder for mentors to manage and might prevent all members from fully participating due to time constraints or feelings of shyness by mentees when faced with speaking to a larger group of peers.

The number of mentors also appears to have mattered here, with youth reporting many positive benefits of having co-mentors for each group. These co-mentors often brought different skills and temperaments to the role, meaning that youth had access to a greater range of knowledge and different personalities that might be a good fit for them. Having two mentors also meant that the group could still meet if one of the mentors was absent, something that contributed greatly to the low number of missed meetings in the program.

The mentoring field has long wrestled with the question of what the right mentor-mentee ratio is for group mentoring programs. In a recent podcast about Project Arrive, lead researcher Gabe Kuperminc indicated that the most cohesive groups in this study had about 8 youth matched with 2 mentors—a 4:1 ratio. Although programs can certainly stretch that ratio in either direction and still find success, that 4:1 ratio may be a bit of a sweet spot, where the group is still manageable, and every youth has a chance to fully participate, while still bringing enough diversity of opinion and personality that the groups are not homogeneous and boring. Practitioners should think carefully about finding a group size that works for their mentors and should strongly consider having co-mentors, or even 3 or more mentors for a group. Multiple mentors may mean less chaos, more options for a close mentoring relationship, and fewer missed sessions.

3. “Curriculum with creativity” may be essential to avoiding a cookie cutter mentoring experience.

One of the interesting side notes in the evaluation report is that groups seemed to find their groove when deviating, as needed, from the prescribed curriculum of the program. Now, the curriculum itself offered a lot of choice, where mentors and youth could choose their discussion topics. So that offered some flexibility and customization right there. But some mentors went further by introducing other activities or even setting the curriculum aside for more organic conversations when warranted. For example, some mentors noted that there were times when students would arrive at the meetings upset about something that had happened in school, in the community, or on the news. These mentors recognized that what their mentees needed was a chance to process, to vent, to share with each other and to hear each other’s voices. They assessed that sticking steadfastly to the set activity would not meet the needs of students and decided to go in a different, more meaningful direction that day.

In the aforementioned podcast, Dr. Kuperminc describes this deviation as “curriculum with creativity” and argues that while all group mentoring programs need some kind of activity-driven curriculum to guide the groups, there are plenty of times when deviating from that curriculum and meeting youth where they are is the right thing to do. This has the potential to prevent youth from feeling as if they are being treated like widgets to be manipulated and engages them in meeting immediate needs and processing challenging topics in a supportive, peer-focused environment.

There were other things that groups did to customize the experience. For example, each group was free to develop little rituals and traditions that were done each time they met. These rituals contributed to the feelings of “family” that many participants spoke of. They offered not only a way of grounding the group in routine, but they also gave groups some control and autonomy over how they met and the way they ran their groups.

Unfortunately, not all mentors were adept at deviating from the curriculum in meaningful ways, and many struggled with group management and situations where going “off script” may have been warranted. In fact, the evaluation report posits that some outcomes could be strengthened with more training and supervision for mentors, which could not only strengthen the facilitation of the groups, but might also help mentors know when to deviate from the “script” and engage youth in more open conversation and sharing.

4. Improving resilience is an outcome with an ambiguous payoff.

Perhaps the most impressive findings in the report are those related to resilience outcomes. Project Arrive appears to have produced some really meaningful changes in several external areas that could boost youths’ resiliency: perceptions of school support and school belonging, school participation, and caring relationships with prosocial peers. The program was less effective with internal resiliency measures, with only problem-solving coming out with evidence of a strong positive benefit. There was even a hint that external assets were best explained through the relationship with the mentor and that internal assets were best explained by the relationship with peers in the group, suggesting that these programs are about more than just an adult-youth relationship. These groups appeared to get kids interacting with one another in ways that built resilience too.

But evidence of impact of the program on other outcomes was not so great. There were essentially null findings around juvenile justice involvement and youth grades. One might wonder why these things were not improved if the youth were making such great gains in these resiliency areas. The issue may be that resiliency is a process, not a stepping stone to other improvements. It’s a process that is activated when something bad happens to a child and they either bounce back... or don’t. Ideally the protective factors (like the ones seemingly built up by Project Arrive) kick in in these moments and keep youth from sliding down the rabbit hole. But one can’t predict when and how those resilience factors will be called upon. The resilience assets built by the program may have helped some youth tackle immediate challenges at school. Yet, for most of the mentees, these built-up factors may only become useful at some point in the future, when they have a negative experience where they need to draw on their assets. This highlights the strange contradictions that can happen in mentoring programs. Project Arrive may not have been doing the right things to immediately impact grades or delinquency, but they appear to have strengthened assets that can serve these youth well into their futures. We might never know the full impact of that boost in protective factors (although a longer-term follow-up of the program could certainly help in that regard). But it seems likely from the effect sizes reported here that they are quite meaningful.

For those who wish to learn more about the importance of viewing resilience as a process and not a set outcome, see this thoughtful blog post, also by Dr. Kuperminc.

For more information on research-informed program practices and tools for implementation, be sure to consult the Elements of Effective Practice for Mentoring™ and the "Resources" section of the National Mentoring Resource Center site.

*Note: The National Mentoring Resource Center makes these “Insights for Mentoring Practitioners” available for each program or practice reviewed by the National Mentoring Resource Center Research Board. Their purpose is to give mentoring professionals additional information and understanding that can help them apply reviews to their own programs. You can read the full review on the Crime Solutions website.

In considering the key takeaways from the research on this program that other mentoring programs can apply to their work, it’s useful to reflect on the features and practices that might have influenced its rating as “No effects” (that is, a program that has strong evidence that it did not achieve justice-related goals).

1. A commitment to evaluation over time can have tremendous benefits.

The commitment of Citizen Schools to program evaluation is very impressive, with many studies over the last two decades being referenced for this review—far more than the NMRC typically sees when doing a review for Crime Solutions. It’s not an exaggeration to say that Citizen Schools has been continuously evaluated since 2001, with several cohorts of youth tracked not only through their time in the program, but also into high school through the completion of their senior years. These evaluations examine program implementation and strengths and weaknesses of the program model, as well as student short-term gains and longer-term trajectories.

This is a brave choice for a program, especially in terms of tracking the long-term outcomes of participants well after their time in the program. There is no guarantee that the work of mentors and other caring adults will resonate with youth over time, as the wisdom of a mentor and the lessons imparted may fade or fail to surface in critical moments. But Citizen Schools should be applauded for testing to see if the work they do in middle school has an impact on the high school experiences of the youth they serve. The program is designed to not only address immediate academic needs through an extended learning time approach, but also to provide enrichment, skill-building, and future planning that one would expect might change the long-term academic fortunes of the youth served. The only way to find out is to collect the data.

The drawback comes when the results are less than expected, when all the life that happens in the high school years washes away some of the growth that happened in the program. The two major evaluations here paint a pretty different picture, with the evaluation of the original Boston site showing evidence of some meaningful benefits for participants’ academic trajectories and the more recent evaluation of sites that had expanded out nationally showing little to no discernible long-term impact for the full sample.

But it is also clear from these studies that Citizen Schools is a data-driven organization that has made several changes to their practices and implementation over the years as the annual data from these evaluations came in. Readers are encouraged to read the evaluation reports that informed this review, as they offer a good example of a program telling its story and making changes over time through continual evaluation activities. And that early evaluation provided the evidence they needed to expand their model to other cities, which is but a dream for many providers who want to take their work to scale. So regardless of the findings of their impact evaluations, Citizen Schools deserves a lot of credit for doing this level of process and long-term outcome evaluation in the first place.

2. Bringing in a “second shift” isn’t easy.

Because Citizen Schools included process, or implementation, evaluation as part of that ongoing research work, they have been able to learn a great deal as to what works in their model and where the sticking points of implementation may lie. Practitioners who are developing similar programming can learn a lot about the challenges of engaging a “second shift” of adults to come in and pick up where the first shift of teachers and school personnel leave off. Arguably the biggest hurdle in making that work is to coordinate the efforts of Citizen Schools staff and volunteers with the content of the school day and the culture of the school. At some of their sites, this coordination was noted as a major challenge, but the seemingly more successful sites found a way to share information with the Citizen Schools staff and those teaching the apprenticeship classes. This ensured that homework help was in accordance with how the subjects were being taught during the school day and that information about particular students’ struggles was shared so that they could receive some individualized support.

This “overlap” of shifts also allowed the program to mirror the behavior and discipline policies and procedures of the school. So, for example, if the school was implementing a behavioral curriculum that set up rewards for good behavior, the program could also offer that and give youth consistent messages about their behavior.

But it also sounds like managing student behavior was a frequent issue at many of the sites, as noted in participant surveys. One can imagine that a staff composed mostly of younger AmeriCorps volunteers and a group of volunteer mentors might struggle to manage large groups of youth in an afterschool setting where the normal disciplinarians of the school day are not around.

There were also issues related to staff turnover, and the model actually requires Teaching Fellows to transition out of their role after two years. This means that the people responsible for running the program—who have built relationships with parents and figured out how everything works best in a particular setting—simply move on after two years of building up that institutional knowledge. That’s a lot of experience and relationship walking out the door. One wonders if the model might have more stability if those Fellows were regular paid staff who could stay in place over time.

But, as one of the evaluation reports notes, Citizen Schools learned all these things as a result of their process evaluation work and, as the program expanded nationally, they used this information to codify the requirements and structures of the program in ways that appeared to improve implementation over time, if not the overall impact results. So once again, the lesson is that one can’t learn lessons if one isn’t looking for them in feedback and data.

3. If you build it, can it be maintained?

The most recent evaluation report noted that sustainability after the initial seed money implementation was challenging for many schools. This is not surprising as the Citizen Schools model prioritizes resource-deficient schools and communities, with the idea that if they can tap into the strengths in that community they can build a lasting programmatic infrastructure.

But that seed money notion is always harder to make work in real life, and the evaluation makes it clear that coming up with the next round of funds to keep the program going was a common challenge. A major culprit was turnover in school or district leadership, which often saw program champions move on and left the program without key advocates who could make sure it was included in future budgets or who could raise funds from different sources. Practitioners working on a start-up grant would be well advised to continue to find allies and advocates who can champion the program, as there is no guarantee that the original supporters will remain in the mix over time.

4. The mandatory vs. opt-in debate is challenging.

The original iteration of Citizen Schools in Boston was offered on an opt-in basis for youth who were interested in the offerings of the program. This approach often resulted in a strong critical mass of youth participants who really wanted to be there. It also likely kept the program at an appropriate size and scale for the local resources available.

As Citizen Schools expanded nationally, it seems that they decided to make their model mandatory for all youth in particular grades. This likely facilitated stronger participation and buy-in from the school staff and parents. Yet, it also meant that some youth were mandated to be there when they would have rather been doing something else with their time. Anyone who has worked with middle schoolers knows that they love their first tastes of autonomy and greatly value their independence and free time. From the feedback gathered in the evaluation, it sounds like this was an issue for some of the sites, as youth resented being held at school for several more hours, even if they were likely getting some needed help out of the experience.

There are examples of other programs offering “whole grade” mandatory models. For example, the Peer Group Connection program led by the Center for Supportive Schools employs a model where an entire freshman class is mentored for the duration of their first year of high school. But those activities happen during the school day and are integrated into the freshman experience. It seems like that dynamic changes once the clock hits 3:00 and students start thinking about life outside the school walls. So while there are good reasons to go with a mandatory “whole grade” participation approach, it can have downsides that practitioners need to be aware of, especially when working with older youth who have rich lives and a sense of autonomy to nurture.

5. High school “access” work can be good practice for college access work.

One of the striking things about the original Boston-based evaluations of Citizen Schools is just how much effort the program put into steering participants to what the program termed “high-performing” high schools. In fact, one of the major metrics they focused on was how many alumni went to and persisted in one of these schools. And if you look at how youth were supported in this transition, it looks a lot like the work of college access programs: providing youth and parents with information about various high schools they could apply to, touring schools to assess fit, helping with the application form and financial aid considerations, etc. While the evaluations of Citizen Schools did not track cohorts of students into college, one wonders if practicing that selection and application process for high school built skills that proved handy when applying to higher education.

Now, not every part of the country has a high school system that operates with the same type of choice that Boston’s odd mix of private schools, parochial schools, charter schools, and traditional public schools does. Students there have a lot more choice and mobility than do students in many other cities. But teaching these skills about how to choose and apply to the right school may be tremendously valuable, and it’s a shame that the evaluation didn’t look at whether those skills were helpful years later when youth were out of the program but doing something similar in the college application process.

6. It’s always good to read honest discussions of research limitations and biases.

One notable thing about the two main evaluation reports of Citizen Schools noted here is just how clear the authors were in calling attention to the potential for false positives, biased samples, and other limitations of the research that may have clouded the findings. Any good evaluation report will have a limitations section, but those can sometimes downplay the severity of the issues or gloss over flaws in data collection or analysis. But that was not the case here.

The 2010 study by Arcaira and colleagues notes that there is a strong possibility of hidden bias in the results, mostly caused by the fact that those youth who opt into Citizen Schools may be different from those who don’t in meaningful ways. This possibility threatens the validity of even their matched comparison sample. It’s also worth noting that when a program participant dropped out of the study, they were replaced by a student who did not, meaning that the evaluation did not endeavor to track program dropouts, focusing instead only on those students who stuck with it. Once again, this might mean that persisters have strengths or supports that other students do not, but that are unaccounted for in this design. Non-participants in the comparison group who dropped out were also replaced by others who persisted. The report authors ultimately conclude that these and other factors may have overestimated the impact of the program, or at least confused things to the point where the validity is in question. And for an evaluation where many of the positive results were only marginally statistically significant, that’s a real issue.

The 2016 evaluation by Abt Associates offers a similar honest assessment of validity threats. It notes that data around participant experiences was not collected in a fully structured way and that not all youth or staff answered the same questions or participated equally in the data collection. The evaluation also notes the general research concern that most of the participant reports were done via self-report surveys, which can be positively biased as respondents give answers that are socially acceptable or “nice” but not necessarily accurate (e.g., program participants may be prone to report overly positive outcomes to help the program look good or even to cognitively justify their own time investment in program participation).

The Abt evaluation notes another issue that impacted their ability to tell a full story: variation in implementation across sites. The authors stress that Citizen Schools sites varied considerably in their implementation. Some of that is due to local customization that is entirely appropriate to the model and the available local resources. But much of it also appeared to be the result of implementation challenges, staff turnover, dwindling funds, or a poor fit between the program and the school conceptually. This leads to an evaluation context where sites are doing potentially radically different things while the evaluation tried to say something global about the “model.” If that model varies too widely, you wind up comparing apples to oranges to the whole fruit basket. It’s hard to say whether the Citizen Schools model was effective when there was so much variation in what youth were offered and received.

But it’s refreshing to see evaluation reports that honestly describe these concerns. Practitioners should always insist that their evaluation reports provide similar context around the strengths and limitations of the evaluation design and available data.

For more information on research-informed program practices and tools for implementation, be sure to consult the Elements of Effective Practice for Mentoring™ and the "Resources" section of the National Mentoring Resource Center site.

*Note: The National Mentoring Resource Center makes these “Insights for Mentoring Practitioners” available for each program or practice reviewed by the National Mentoring Resource Center Research Board. Their purpose is to give mentoring professionals additional information and understanding that can help them apply reviews to their own programs. You can read the full review on the Crime Solutions website.

In considering the key takeaways from the research on this program that other mentoring programs can apply to their work, it’s useful to reflect on the features and practices that might have influenced its rating as “Promising” (that is, a program that has strong evidence that it achieved some of its justice-related goals).

1. It always helps to provide some follow-up to ensure your initial success is maintained.

One of the interesting aspects of the Bottom Line model is that in addition to providing robust college planning and access services, they maintain a presence at several regional colleges and universities that tend to be the types of schools they often encourage youth to attend. These higher education institutions are well-aligned with the clients of Bottom Line as they offer a good combination of relative affordability, low dropout rates, and good academic reputations. And while some of Bottom Line’s outcomes on college acceptance and persistence beyond freshman year are presumably the result of helping students pick a good fit in the first place, the fact that students still have access to Bottom Line counselors and supports even after they get on campus may well be a major reason why their college retention results look better than those of other services. Although the bulk of the work in their model happens during the core application and acceptance process, the program seems to have added a valuable secondary component that helps ensure that their mentored youth get ongoing support for making progress toward the ultimate goal of college graduation.

We have seen other examples in our reviews over the years of programs using mentors or ongoing relationships to help maintain progress youth have made under a core set of services. One example can be found in the National Guard Youth ChalleNGe program, where youth are offered community mentors (that they help identify) as a way of maintaining the positive path they are on when they leave the residential portion of the program. Research on the program indicates that youth who stick with that mentor for several years after the initial residential experience tend to maintain their positive direction in a number of ways compared to youth who did not get a mentor or who stopped seeing them after some time. This is a good reminder to programs that, when possible, a little follow-up work can go a long way after your initial services are complete. Other programs may be able to produce similar long-term trajectories through the use of alumni groups, occasional program “check-ins”, and ongoing training and learning opportunities once youth leave the core services.

2. Fidelity of implementation pays dividends.

One of the most helpful aspects of the evaluation work done by Carr and Castleman on behalf of Bottom Line is that they examined not only outcomes of the program, but also factors that may have led to those outcomes. This is particularly helpful when trying to understand why the program seems to have outperformed other college access programs and services in which youth in the control group may have participated. Among the factors that the evaluation looked at was the frequency and consistency of meetings with students. Most of the advisors in the program met with their students an average of almost once every month for 15 months, with a surprisingly high percentage of those being in-person office visits. We have reviewed other college access mentoring programs that offered a lighter touch with more emphasis on phone and text check-ins. Bottom Line seems to have emphasized the consistency and follow through of their meetings with students throughout the process and, not surprisingly given this, the students rated the influence of their advisors very highly. This might not have been the case if they hadn’t met with such regularity and at key points in the application process.

The consistency of service delivery was further examined in the evaluation with an analysis of the results of each advisor individually. This was designed to see if some advisors were more effective than others and if the overall good results were being driven largely by one or two “superstar” performers. It turns out that 19 of the 20 advisors had a net positive estimated outcome for the students they served compared to the control group students. (One has to wonder what the post-evaluation conversation was with the lone advisor whose students fared considerably worse.) Even more impressive is that in Bottom Line there is very little deliberate matching of advisors with youth who might be a “good fit” based on shared backgrounds, gender, or interests. The program essentially “randomly assigns” youth to any and all of the mentors, something they could only do if they had confidence that each advisor was able to faithfully walk each student through the activities and could form a strong working relationship with a wide variety of participants.

We put a lot of emphasis on matching mentors and youth based on surface-level similarities and interests in the mentoring field. But programs like Bottom Line that have a very clear set of goals and a structured approach to getting the young person from “point A” to “point B” should arguably emphasize the consistency of the experience from mentor to mentor and work to ensure that, regardless of who they are matched with, youth will get a positive relationship that hits all the critical tasks in their work together. Bottom Line likely achieved this fidelity of implementation through strong advisor training and by monitoring the progress of each match through the program. But now that they know their model can be delivered by many types of individuals, for a diverse array of students, with very little inconsistency, they will have an easier time replicating these results in other locations. They have learned that for their program the process is, in many ways, as important as the people.

3. Sometimes, all you may need to do is change how young people think about a situation.

In addition to finding that the program was producing positive results in most of the areas examined, the Bottom Line evaluation shed some light on how the program was influencing young people. Most notably, Bottom Line students were much more likely than control youth to indicate that affordability was a major factor in their application decisions. The Bottom Line model has several stages where cost information is highly prioritized: when considering potential schools to apply to, when reviewing acceptance information and financial aid offerings, and in ultimately deciding where to attend. Other college access models also review these types of considerations, but the information provided by control youth indicated that this financial review was less emphasized than it was in Bottom Line and that they ultimately didn’t consider affordability as much when applying and enrolling. By getting these students to think a bit more deeply about the intersection of school quality and school cost, the program helped them prioritize choosing a school that they could afford and attend for all four years of college, something that many students, especially those from lower-income backgrounds like Bottom Line’s participants, often struggle to make a reality.

Other mentoring programs should think about what information might be critical to helping mentees reach their goals within the context of their programs. They may find that one of the best ways to achieve those goals is by encouraging mentors to focus on providing key information and helping youth enhance their ability to think about important factors in their decision-making. The outcome of a decision can only be as good as the thinking and information that informed it, and Bottom Line seems to have realized that their theory of change for students has some clear points where the influence of a mentor helps youth plan for their futures more effectively.

For more information on research-informed program practices and tools for implementation, be sure to consult the Elements of Effective Practice for Mentoring™ and the "Resources" section of the National Mentoring Resource Center site.

In considering the key takeaways from the research on this program that other mentoring programs can apply to their work, it’s useful to reflect on the features and practices that might have influenced its rating as “No effects” (that is, a program that has strong evidence that it did not achieve justice-related goals).

1. Group mentoring, combined with personal reflection and writing, seems like a promising approach to working with youth with justice system involvement.

Although the Arches program struggled to achieve some of its goals related to recidivism, its group mentoring approach is one with great appeal for working with populations that have experienced juvenile justice involvement and adverse life experiences. The hope is that a group setting, especially a supportive and relaxed one, might make participants, especially boys, more open to sharing their experiences and feelings than might be the case in a more intimate one-on-one model. It is also assumed that the group setting will not only allow youth to learn from the shared life experiences of others, but will also facilitate peer support and new relationship formation, something that might be particularly valuable to youth looking to leave old habits and friends behind. The group environment might also allow reticent youth to ease their way into a mentoring relationship, to see how the adults in the space are going to act before committing, something that is hard to do when thrust into a two-person mentoring relationship. Lastly, it is hoped that a group environment will allow for more fun, a wider variety of activities, and a sense of camaraderie among the participants, resulting in a more enjoyable program experience overall.

In fact, we have seen other programs working with similar populations use a group approach. One notable example is the Reading for Life program, which used small groups built around a book club format to reduce system involvement among 13- to 18-year-olds. That program is rated “Promising” by Crime Solutions, suggesting that a group approach can be at least moderately successful in working with similar youth. Reading for Life also has an extensive journaling and peer-sharing component, not too dissimilar from the Interactive Journaling used at the Arches sites. These journaling exercises give youth something to contemplate between sessions that reinforces key learnings and understandings from the program; they also give the mentoring sessions a focus that puts youth voice at the center of the conversation and gets youth opening up in ways that could easily be challenging without a prompt. Journaling between sessions, in addition to helping youth make meaning of what they are learning through the program, provides the backbone of their interactions and the connections they make with other youth through sharing and listening.

The evaluation here suggests that the group mentoring model was implemented with varying levels of quality across the Arches sites. Some sites appear to have had very experienced mentors who knew how to facilitate a group, when to step in and out of conversations, and how to create a safe and open culture of sharing. Other sites are reported to have struggled with mentors who lacked facilitation skills or who tended to dominate the conversation with their own voices rather than teasing out the voices of the youth in the room. All in all, though, it seems like most participants valued the group approach, suggesting it is a good foundational structure for the work of the program. This group format is supplemented by ad hoc one-to-one mentoring for youth who want more support. In general, Arches seems like yet another good example of a program using a group mentoring model, supported by supplemental journaling, to effectively support juvenile justice-involved youth. The program’s outcomes may be enhanced by strengthening the facilitation of the groups, something noted in the conclusions of the evaluation itself.

2. Credible messengers, well supported, can also be critical to working with these youth.

One of the things noted as meaningful by youth participants was that the program placed great emphasis on recruiting mentors with backgrounds and experiences that, to the degree possible, mirrored the backgrounds of the youth being served. The evaluation notes that this similarity of experiences, culture, community, and ethnicity helped place youth at ease and let them feel like mentors could relate to their stories, opinions, and emotions.

However, using mentors with “lived experience” also came with some challenges. According to the report authors, some mentors lacked training in key skills, such as group facilitation, utilizing change talk or cognitive-behavioral principles, responding to trauma, and getting the most out of the journaling curriculum. Even though these were paid positions, not volunteers, many of the mentors seemed to struggle with basic aspects of group management and conversation facilitation. The mentors, it seems, often were exactly the right people to relate to the youth in the program, but may not have known how to build effectively on that “credible messenger” status. The program offered a number of supports in this area—in fact it contracted with an organization just to provide mentors with in-depth training and technical assistance on the skill gaps noted here. But that training was inconsistently utilized and not always adhered to when provided.

The design of Arches—the combination of group mentoring, journaling, interactions informed by cognitive-behavioral principles, and 24-hour-a-day support—seems tremendous on paper, but it faced challenges when less skilled mentors were responsible for much of its delivery. These credible messengers brought tremendous relatability and credibility to the role, but found it challenging to implement some key aspects of the model. This might suggest the need for more staff involvement in program activities or, at least, more scaffolding provided to mentors in support of their work. It’s unclear why the training and technical assistance offered here was ineffective, but it is clear that credible messengers may need additional support when they are so deeply responsible for delivering what can be a fairly technical and nuanced intervention like Arches.

3. A “family” atmosphere may help participant engagement and reduce barriers to deeper involvement.

One of the clear strengths of Arches appears to be the notion of creating a sense of “family” at many of the sites. This is noted throughout the evaluation and the concept of family may be especially salient for the youth who Arches tends to serve. Again and again, the evaluation noted that participants recognized and valued the family atmosphere at each site. This family-type environment was likely very appealing to youth who were challenged by their home environment or lack of parental involvement. It also may have offered an easy way to develop a new “crew” of friends for youth whose delinquent behavior was facilitated by peers in an attempt to find a sense of belonging or connection to a group. The family environment also may well have boosted attendance and let youth know Arches was a safe space for them.

There were several things that Arches did to facilitate this family atmosphere. The program offered food at each mentoring session, not only as an incentive for attending but also to build a sense of family. Mentors and other staff were available 24 hours a day, seven days a week, something very rare in mentoring programs and much more akin to the role of a natural mentor. Alumni were encouraged to come back to the program and become mentors themselves, or at least participate in the sessions and offer their guidance. The program also offered transportation assistance, referrals to other services, and a number of other features all designed to help mentees feel embraced and cared for. And the group mentoring format built a sense of togetherness and connection that one can imagine must have felt hard-earned by these youth.

Unfortunately, this emphasis on a family atmosphere didn’t have quite the influence one would have hoped, as judged by the results of the evaluation. The program had retention issues at many sites, with at least some of that reported to be the result of youth being inappropriately referred to the program. But some of it was also assessed to be the result of youth not feeling like the program was a good fit for them. Some reportedly found the program to be too “male centric” and not as accepting of girls as they hoped. Others thought the journaling curriculum, which was originally intended for incarcerated youth, was off-putting. Still others just struggled to get to the bi-weekly sessions and wished the program had more flexible scheduling. It’s hard to make a program that appeals to, and works for, everyone. But the family atmosphere that Arches was aiming for seems like a good fit overall for these youth.

4. Think about whether your program has sneaky impacts.

There are several possible reasons that Arches wound up getting a rating of “No Effects” in its formal review. Perhaps none was more important than the distinction between the different, but related, outcomes of arrests and convictions. Arches seemed to have a rather significant impact on conviction rates one and two years after youth started the program. In fact, two years out from the beginning of the program, Arches youth were half as likely to have been convicted as the comparison group. But that’s not the whole story…

It turns out that they were more likely to have been arrested in that same time period. Now, some of this was due to the Arches youth having higher levels of risk to begin with—in theory, they should have been more likely to be arrested even if the program had never happened. But even when controlling for that, the Arches youth were slightly more likely to be arrested at the 12- or 24-month mark. But not convicted: they were far less likely than the comparison group to actually be convicted.

One wonders whether being in the Arches program itself influenced the conviction rates for some of these youth. Perhaps being in Arches allowed parents, lawyers, or other advocates on behalf of the youth to argue more effectively against prosecution or conviction. Maybe they could argue that these youth were trying to turn their lives around by participating in this mentoring program and deserved another chance at continuing to improve themselves. This type of “hidden” impact might be at work here, where simple participation in the program acts as a bit of a shield, even if the program isn’t necessarily making a huge difference for a given youth individually.

The discrepancy between arrest rates and conviction rates is not explained in the evaluation report. It could be that Arches youth were arrested for less serious infractions that tend to be dropped without criminal prosecution. Or it could be that they had a lot of cases of mistaken identity or false arrests. Maybe they simply engaged in crime that was harder to prove and more easily dropped by overworked district attorneys. But those explanations seem unlikely, as they would require these youth to still be getting in trouble, but have that trouble be radically different from that of their comparison group of pretty similar youth (who were starting from a point of less criminality in the first place).

It’s unclear as to what’s behind that mystery in conviction rates here, but programs may want to ask themselves if there are hidden benefits for their participants—in the perceptions of others, in how they are treated by systems—that might influence their evaluation results. Because the good news on convictions for Arches is certainly confounded by the fact that their mentees were more likely to get arrested. That doesn’t seem much like the reduction in juvenile justice involvement they were hoping for.

*Note: The National Mentoring Resource Center makes these “Insights for Mentoring Practitioners” available for each program or practice reviewed by the National Mentoring Resource Center Research Board. Their purpose is to give mentoring professionals additional information and understanding that can help them apply reviews to their own programs. You can read the full review on the National Mentoring Resource Center website.

In considering the key takeaways from the research on this program that other mentoring programs can apply to their work, it’s useful to reflect on the features and practices that might have influenced its rating as “Promising” (that is, a program that shows some evidence that it achieves justice-related goals when implemented with fidelity).

1. Customization of services for youth in foster care and with disabilities is a must.

The evaluations of this program describe several efforts to really customize it to the needs of foster youth, especially those with disabilities. For example, the TAKE CHARGE curriculum at the heart of the My Life model was originally designed to be used with all youth, the idea being that self-determination skills are helpful to all young people as they approach young adulthood. But the program customized this core idea of the curriculum to focus exclusively on goals related to foster care transition or educational attainment, something these youth tend to struggle with. They also infused specific skills and competencies that would support their goal setting, with revised lessons focusing on skills like establishing “support agreements” with caring adults and how to work effectively with key individuals, such as judges, attorneys, and child welfare professionals. They also allowed for deviation from the set sequence of lessons in the traditional TAKE CHARGE model. Because youth in foster care often are in crisis or are wrestling with challenging situations, the program allowed for earlier delivery of lessons on key skills like problem solving and seeking help before moving into the goal setting work. This allowed youth to address immediate concerns as needed and set a more stable foundation for thinking about goals and setting plans of action.

In addition to customizing the curriculum, the program also placed greater emphasis on who was serving in the coach/mentor role, choosing to emphasize the recruitment of individuals who were slightly older than participating youth and who had been in foster care or wrestled with a disability themselves. This is another example of a program opting to use “credible messengers” when working with vulnerable youth, the idea being that these individuals can relate to the struggles and life histories of mentees more effectively. By having near peers who had overcome similar challenges in the mentoring role, these youth were also offered a powerful example of what their perseverance might translate into. One can imagine that it’s easier to think about literally “taking charge” of one’s life when the person encouraging you to do so is a living example of what can happen when that commitment is honored. Many of these mentors were program alumni, and they were often enrolled in higher education or building solid careers. Having a strong example of a potential “future self” in the mentoring role surely plays a role in helping mentors and mentees bond and work collaboratively.

Other small nuances of the program also reflected just how customized the intervention was for these youth—for example having the weekly coaching sessions at the youth’s home or school at a time that worked for them (such as a free class period), an approach that recognized that these youth often had hectic schedules and placement moves and that reaching them wherever they could with flexibility would be important. Those types of tweaks to the services, to the curriculum, and to the background of the mentor all seem likely candidates to have contributed to this program “working” for participants. It’s worth noting that in each iteration of the program, there was very little attrition of youth participants—they likely felt right at home in this program built for them.

2. Self-determination is a powerful tool for working with youth who have a lot of their daily life tightly controlled.

What’s interesting across the evaluations here is that self-determination was used to a few different ends, but had elements of success each time. The initial Powers (2012) evaluation focused on using self-determination to boost transition planning and outcomes, while the Geenen (2013) study looked at educational planning and attainment for a similar (but slightly younger) group of foster youth. The Blakeslee and Keller (2018) study then looked at the longer-term juvenile justice impacts for both cohorts. But in all three cases, it seems like this self-determination approach had a real impact, with several of the studies reporting evidence of positive changes with medium to large effect sizes—a magnitude of impact rarely seen in mentoring programs.

We’ve written about self-determination as a key ingredient several times for the NMRC, most notably in the practice implication sections of our evidence reviews on Mentoring Youth in Foster Care and Mentoring for Youth with Disabilities. Readers are encouraged to look at those publications for more detail about the value of self-determination coaching in the context of mentoring. But at the end of the day, self-determination is a form of empowerment. And whether it’s in Torie Weiston-Serden’s radical new idea of a “critical mentoring” approach or the tried and true 5 Cs of youth development, we all recognize that empowering youth to be active and engaged in not only their own future but in contributing to the world around them is one of the core principles of good mentoring. It may take weekly coaching, and a lot of hand holding and trial and error, but the TAKE CHARGE approach seems to really help youth find the right blend of mindset, motivation, and external support to make a difference. 

3. Using prior data as a jumping-off point for future program evaluation.

Practitioners should take note of something really clever about the Blakeslee and Keller study. Their longitudinal findings involved only one original data collection point: “time 4,” which was several years after youth from the Powers and Geenen studies had exited the program. They were able to build off the great data sets that had been collected during those prior two federally funded studies by doing a simple survey follow-up with participants and some records checks to find out about subsequent justice system involvement. This allowed them to do a really strong longitudinal investigation of program impacts at a fraction of the cost, as it simply built on the evaluation investments that had come before. This is also a great example of researcher collaboration, as not all scholars are as comfortable as these were in having someone else build on their data.

But for any program that’s wondering why they should do rigorous data collection, or whether the data collected from previous evaluations is worth anything down the road… here is a clear example of two researchers doing something really meaningful and new by building on data collected in the past. Blakeslee and Keller already knew that the program had good evidence of producing some meaningful effects. By doing just one more round of data collection, they were able to not only show evidence of the program contributing to reductions in criminal justice involvement, but also calculate cost-benefit estimates that can help draw more attention and funding to the model. Their estimates suggest that the program not only pays for itself, but that it could also save taxpayers money if implemented at scale. Hopefully this example can get practitioners thinking about what they could do with their old data and how they might be able to build on prior data collection efforts with some clever follow-up activities.

4. A good example of cost-benefit calculations that others could emulate.

One last thing of note about the Blakeslee and Keller study is that it offers a very detailed explanation of exactly how they calculated the costs of the program, as well as the savings associated with various program benefits. Costs were calculated in a way that would be very helpful to policymakers, especially in calculating the costs to serve a youth in the existing program and separately calculating the costs if the program needed to be started up from scratch. This can help funders and policymakers in deciding whether to fund or expand the existing program or invest slightly more in starting up new replication sites—with the costs of each option clearly articulated, as well as the details of how those costs were determined.

On the “benefit” side of the equation, the authors were able to detail not only the costs of several negative youth outcomes (e.g., a night in jail) but also the savings that the program shows evidence of actually producing based on the sample studied. This allows funders and policymakers to see how the program may impact specific outcomes of interest and clarifies exactly how the program may well benefit taxpayers. They even instruct readers on how to calculate the benefits of this model in other parts of the country by plugging in their own cost estimates for things like incarceration and court costs.
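As a rough illustration of the plug-in-your-own-numbers approach the authors describe, the sketch below walks through a minimal benefit-cost computation. All dollar figures, outcome categories, and reduction estimates here are invented for illustration only; they are not taken from the Blakeslee and Keller report.

```python
# Hypothetical benefit-cost sketch (illustrative numbers only, not from the
# Blakeslee and Keller report). The idea: plug in local unit costs for the
# outcomes a program affects, multiply by the per-youth reduction the
# evaluation estimated, and compare total benefits to the per-youth cost.

# Local unit costs for negative outcomes (assumed values)
unit_costs = {
    "night_in_detention": 500.0,   # cost per night of detention
    "court_processing": 2000.0,    # cost per adjudicated case
}

# Per-youth reductions attributed to the program (assumed values)
reductions = {
    "night_in_detention": 10,      # 10 fewer detention nights per youth
    "court_processing": 0.25,      # 0.25 fewer adjudicated cases per youth
}

program_cost_per_youth = 3000.0    # assumed per-youth cost of services

# Total savings per youth: sum of (unit cost x estimated reduction)
benefits_per_youth = sum(unit_costs[k] * reductions[k] for k in unit_costs)

# Ratio above 1.0 means estimated savings exceed program costs
benefit_cost_ratio = benefits_per_youth / program_cost_per_youth

print(f"Benefits per youth: ${benefits_per_youth:,.0f}")
print(f"Benefit-cost ratio: {benefit_cost_ratio:.2f}")
```

Swapping in local incarceration and court costs, as the report suggests, changes only the `unit_costs` entries; the structure of the calculation stays the same.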

In the end, they estimated that My Life’s benefits exceed its costs by up to three times, depending on whether one is replicating the model or simply funding the existing site. Programs, and the evaluators they work with, would be well served to review this section of the Blakeslee and Keller report, as it offers a stellar example of how to calculate program costs and estimated benefits and how to present them to stakeholders in a way that doesn’t overstate the estimated impact but also makes it clear that further investment in the model is likely money well spent.

Readers who want to learn more about the Blakeslee and Keller study of My Life should listen to the season 1 episode of the Reflections on Research podcast that featured both researchers describing their study and findings, including the cost-benefit work, in great detail.

For more information on research-informed program practices and tools for implementation, be sure to consult the Elements of Effective Practice for Mentoring™ and the "Resources for Mentoring Programs" section of the National Mentoring Resource Center site.

In considering the key takeaways from the research on this program that other mentoring programs can apply to their work, it’s useful to reflect on the features and practices that might have influenced its rating as “Promising” (that is, a program that shows some evidence that it achieves justice-related goals when implemented with fidelity).

1. When “short term, high dose” accomplishes more than one might think.

One of the most interesting aspects of the Youth Advocate Program (YAP) model is that it is fairly short-term compared to many community-based mentoring opportunities—only 4-6 months, when most community-based programs aim for at least a year, if not several, as a minimum expectation. At first this may seem surprising, given that YAP serves youth who are not just “involved” in the juvenile justice system but who have violent or multiple serious offenses that have them facing incarceration in a juvenile detention facility. Supporting youth in that situation would seem at first glance to require long-term relationships with caring mentors who help these youth navigate a series of challenges and build a life that hopefully allows them to avoid recidivism.

Well, what YAP lacks in duration, it more than makes up for in intensity. Because YAP mentors are paid staff, youth can meet with their mentors at least 7.5 hours a week—occasionally increasing that to as many as 30 hours a week if they need extremely intensive help in a time of crisis. That not only provides ample opportunity for the program’s mentors to ensure youth are meeting the expectations of their court-mandated Individual Treatment Plan, it also means they have the time to build a deeper relationship and really get to know what makes a young person tick. The program tries to match mentors and youth along lines of similar interests and geographic proximity, hopefully giving the pair as few barriers as possible as they start to work together, knowing that the time is short.

The fact that this program shows reductions in criminal behavior a full year after exit from the program suggests that it’s possible to facilitate some longer-term changes in behavior through a rather brief relationship, provided that the intensity is high and the work the pair engages in is guided by an overarching plan. In fact, the OJJDP study of YAP referenced in this review offers further hints as to how mentors can make these relationships impactful in a short period of time—and which behaviors they may want to avoid at various points.

2. Timing is everything.

While the old adage that “timing is everything” certainly applies to comedy, it also seems to matter a great deal in programs like YAP. One of the interesting aspects of this evaluation is that it focused so heavily on the actions of mentors and what was happening within matches, in addition to examining program outcomes. This is incredibly helpful to the field, as it allows us to not only know if the program “worked” (and it does seem to have achieved many of its goals), but also how it may have worked and the connection between the actions of mentors and those outcomes. One key finding in this regard is that youth whose mentors engaged in more serious conversations at the start of the relationship and more playful and relational activities toward the end fared better, in terms of their levels of misconduct when they left the program, than youth whose mentors started out more playful before moving into serious conversations. At some level, this bucks conventional wisdom that assumes mentors are better off taking time getting to know youth and having some low-stress interactions to build trust before engaging in serious discussions or talking about behavioral issues. But what YAP mentors seem able to do is start the relationship from a place that is both relational (hence the importance of those shared interests and neighborhoods) and focused on serious conversation right off the bat. Given that these youth are only in this program in lieu of incarceration, and that the pair has limited time to work together, one can see why mentors would want to come in with some serious “getting down to business” from the outset. One can also imagine that in relationships where the mentor is still pushing serious problem-based conversation deep into their time together, youth might begin to tune that out or feel like their mentor only thinks of them as someone with problems.

The most successful mentors here seemed to start with more of a focus on the serious work to be done, but then bolstered the relationship and set the stage for longer term success by ending their time together with more fun and play that let youth know this relationship was about more than the work they had done together, meaningful as that was. Other programs serving youth with serious needs or challenges may want to consider whether they can start addressing some of those tougher issues earlier in the relationship. Perhaps mentors and youth bond together better, in these cases, through the addressing of hard issues, rather than building a “nice” relationship that can get undercut in the youth’s mind when the conversation topics turn more serious.

3. Once again, experienced mentors may boost outcomes.

One of the trends that we have talked about in these “insights” documents before is that mentoring research suggests programs can boost their outcomes by using mentors who have a teaching, advocacy, or prior youth work background, the idea being that they may bring more advanced skills and a deeper understanding of young people to the role of a mentor. This evaluation seems to only strengthen that view: it once again found that youth whose mentors had prior experience as a teacher and higher levels of education fared better in terms of their criminal behavior and engagement in school. The experience and education that these mentors bring may be particularly important in a short-duration, high-intensity program like YAP, where they need to establish a good working relationship with the youth in short order and balance the immediate needs of the youth with the court-mandated plan they must adhere to. One can see how experience as a teacher might build critical skills for that juggling act.

But there was one area where mentors’ education level may have had a downside: mentors with a higher education level were also more likely to be engaging in serious conversations deeper into the relationship, exactly the pattern cautioned against in the point above. Now, there may be many explanations for this, but mentors who have completed more schooling may value education more and be more likely to push their mentees to focus on school. They may also define “success” differently for the youth they are working with. It doesn’t seem that their insistence on serious conversations late in the relationship negated their impact—they still outperformed their less-educated peers. But they did also tend to engage in a behavior that, according to those other analyses, didn’t quite work as well for the youth in the program.

4. What’s the “secret sauce”?

While this evaluation certainly demonstrates the effectiveness of the YAP model, and offers some hints as to effective mentor approaches, it remains unclear exactly how the mentors in this program are able to influence longer-term trajectories for youth from a short, albeit intense, mentoring experience. Given that these youth are multiple offenders and have often committed very serious crimes, one can assume that these youth have been exposed to mentors and caring adult-led interventions long before they arrive at YAP. In some ways, YAP may represent a bit of a last chance with only incarceration left as a form of behavioral correction.

But what made this program stick when others had not? Certainly the intensity of the relationship helps, as does the integration of the mentor into the community and family of the youth, and the blend of play and serious work noted earlier. And having a court-mandated treatment plan looming over the relationship probably also reinforces that this is an opportunity not to be taken lightly by youth.

But we unfortunately don’t have a lot of information about the activities mentors and youth engage in to know exactly how mentors are able to change thinking patterns, long-term behaviors, and other factors that can be real barriers to changing one’s life in the way that successful YAP mentees seem to have. Perhaps it’s just a matter of getting deeper into the life of the youth, something that being a paid mentor spending copious amounts of time with a youth affords. Maybe these mentors have ways of reaching these youth that previous mentors didn’t possess. Maybe the timing of the program and the age of the participants puts pressure on the youth that “this time has to be different.” Hopefully future evaluations of YAP can shed more light on exactly how mentors engage in those serious conversations, how they build strong bonds while doing that, and how exactly those more playful moments toward the end of the relationship leave things on a note that carries forward for the mentee. While this evaluation taught us a lot about YAP, there is still more to learn about this intriguing model.

For more information on research-informed program practices and tools for implementation, be sure to consult the Elements of Effective Practice for Mentoring™ and the "Resources" section of the National Mentoring Resource Center site.


In considering the key takeaways from the research on this program that other mentoring programs can apply to their work, it’s useful to reflect on the features and practices that might have influenced its rating as “Promising” (that is, a program that shows some evidence that it achieves justice-related goals when implemented with fidelity).

1. Mentoring as a collective practice.

One of the great questions of mentoring is what configuration of mentors and youth is the right one for meeting certain needs. When is a group approach the right one? Or perhaps a team approach, where the youth is matched with multiple mentors at once? Well, in the case of the HOSTS program, they have settled on mentoring as a distributed activity, where youth meet with a mentor from the available pool at their school, but there is no guarantee that it is the same person each time—in fact, they might meet with a different mentor on each of the four days of the week. Is it possible for youth to form the bonds we associate with good mentoring when the role of the mentor is played by a different person each time? In the case of HOSTS, the answer seems to be that, at the very least, youth can benefit from this collective approach to mentoring, perhaps because of an emphasis on training that ensures mentors all bring a similar mindset to their work.

The HOSTS Program provides two hours of training to its group of tutors/mentors emphasizing their role in supporting the reading and academic development of youth while emphasizing a mentoring mindset—the ability to develop an interpersonal bond and build the youth’s self-esteem and self-confidence around academics and beyond. Thus, all of the tutors are trained to be supportive and caring adult mentors contributing to a welcoming learning environment for all youth in the program. There have been other examples in the literature of mentors in programs emphasizing a general set of skills, temperament, and relational approach, rather than unique personality characteristics or qualities that lead to bonding with the youth. Examples of this include the Lunch Buddies program, where the person mentoring each student rotates each semester, and the Brief Instrumental Mentoring Program, which emphasizes mentors and youth having a strong “working alliance” focused on the task at hand rather than forming a unique and personal bond. HOSTS adds to these examples as a program where mentors can meet with any child because they are all focused on the unique lesson plan of the child and they all bring, in theory, a similar set of skills and approaches, even though they will obviously differ in personality and background.

This structure brings up a few questions worth exploring for other mentoring programs:

  • Does mentoring in your model need to be provided by the same person? Can you leverage the power of community by training and recruiting a diverse web of community members to provide roughly the same experience?
  • How do you create a culture where all adults in the same context adopt a mentoring mindset as they interact with a range of young people? Is it possible in your program context to create an environment where any adult could mentor any student?

The HOSTS program is able to take this communal approach to mentoring because of a few factors that may not apply to other situations: 1) The program is heavily focused on tutoring and the completion of each student’s individualized weekly reading plans. This means that mentors aren’t getting deep into personal issues or challenges where switching mentors would disrupt the “progress” of the match. 2) The meetings are frequently observed by the program coordinator as they are happening. This allows the program to correct, in the moment, any mentor behavior that strays from the intended delivery, homogenizing the mentoring experience over time. In fact, let’s look at that coordinator role a bit more closely…

2. Program coordinator as active conductor, not distant administrator.

Another feature of HOSTS that allows for that collective mentoring approach is that the site coordinator really acts as the constant for the program, filling a role not terribly dissimilar from that of an orchestra conductor who coordinates and manages the parts of each of the players. In this case, the coordinator is responsible for doing the initial assessments of each youth’s reading strengths and weaknesses, as well as for creating the weekly lesson plans that build on the progress the student is making and adjust frequently to fill in remaining gaps. This takes pressure off mentors to come up with activities each time they meet with the students. It also means that, in theory, any of the program’s mentors can meet with any child in a plug-and-play approach. That weekly lesson plan drives everything. Needless to say, most mentoring programs do not task one program coordinator with coming up with individualized plans for each and every mentee served, let alone doing that weekly. But here it makes sense—the HOSTS platform makes selecting appropriate lessons easy and even provides access to over 16,000 individual lessons, each of which can address a specific area of need.

But the coordinator role doesn’t stop there. As noted above, they are frequently observing pairs meeting to make sure that the tutors are not only following the lessons as intended, but are also engaging the youth in all those “mentor-like” ways that create a supportive learning culture. In this way, they can make sure that there is fidelity to each student’s plan while also ensuring that the “whole orchestra” sounds good and is hitting the right notes. For most programs considering taking a “collective” approach to mentoring where mentors and youth meet randomly in an unmatched context, having a coordinator be responsible for this level of, well, coordination makes a lot of sense. Someone has to have an understanding of each youth’s needs once the decision is made to take the mentor out of that role. Providing that coordinator with lots of extra training and practical support, as the HOSTS platform does, also makes a lot of sense.

3. Importance of clear mission.

Another strength of the HOSTS Program is its clear focus on reading outcomes. While the program does claim a desire to improve students’ language arts skills more broadly, including reading, writing, vocabulary, thinking, and study skills, there is a very clear emphasis on reading fluency and comprehension. Secondary program goals include improved behavior, attitudes, and self-esteem, but once again, these are so secondary that they were not even examined in this evaluation. This tight focus makes so many aspects of the program easier, from the skills mentors must be trained in, to the loose match structure, to the engagement of teachers and parents/guardians. Unlike many mentoring programs that have to prepare for all kinds of conversations, issues, and youth needs, HOSTS gets to focus tightly on doing one thing and doing it well. And with a rating of “Promising,” that seems to be working.

Although the evaluation hews to that tight focus, it also represents a bit of a missed opportunity. It would have been nice to know if that improvement in reading led to other academic gains, or if those feelings of academic confidence that the mentors were trained to bring out perhaps led to improved behavior at school. But the evaluation, much like the program, had a tight focus, and we are left to wonder what else HOSTS can achieve beyond some modest gains in reading comprehension.


*Note: The National Mentoring Resource Center makes these “Insights for Mentoring Practitioners” available for each program or practice reviewed by the National Mentoring Resource Center Research Board. Their purpose is to give mentoring professionals additional information and understanding that can help them apply reviews to their own programs. You can read the full review on the National Mentoring Resource Center website.

In considering the key takeaways from the research on this program that other mentoring programs can apply to their work, it’s useful to reflect on the features and practices that might have influenced its rating as “Promising” (that is, a program that shows some evidence that it achieves justice-related goals when implemented with fidelity).

1. Sometimes simpler is just fine.

One of the things that immediately sticks out about the Baloo and You program is how little is asked of mentors. As with any program, mentors must commit to regular, consistent meetings with their mentee, engagement in the type of activities the program expects, and all the other aspects of the program that the developers deemed to be important. But compared to most modern mentoring programs, the premise behind, and approach to, the work with mentees in this program is remarkably simple.

The program theorizes that when children are young (6- to 10-years-old in this case), certain relationships can offer corrective experiences and what the authors of the second study describe as “an enriched social environment.” The relationships in this program thus offer children a chance to experience new things, learn new life skills, and receive role modelling from a caring adult.

While that might sound like what a lot of mentoring programs offer youth, in reading the two evaluation papers associated with this program, one can’t help but be struck by just how straightforward and “under-stuffed” the program is in comparison to many mentoring programs. In this program, mentors spend one afternoon a week with their mentee engaging in joint activities, adapted to the needs of the child, including things like trips to the zoo, cooking a meal together, working on craft projects, playing sports, or just engaging in conversation. There appears to be no set curriculum, no talking points for mentors beyond just being developmentally appropriate, no skills training or other quasi-clinical activities drawn from other evidence-based interventions, no turning mentors into highly trained pseudo-social workers or therapists. The mentor and the kid just… hang out. They spend time together, and the program hypothesizes that through engaging in normal, everyday activities, these young children will build critical life skills, such as organization, problem solving, concentration, empathy, and good decision making. As one of the papers about the program puts it, Baloo and You is strongly grounded in the idea that learning is a byproduct, not something to be intently focused on and willed into existence through sheer effort and rigid tactics.

But it seems many mentoring programs today have gone in the opposite direction, training mentors in all kinds of developmental theory, the specific steps and actions of tightly implemented interventions, and all manner of “change talk” skills designed to elicit specific changes in the youth they are working with. They transform the simple “caring adult spending time with you” mentor archetype into a quasi-therapist/coach/teacher/social worker, stuffed to the brim with the latest research and clinically-derived tips for moving their mentee from point A to point B. These programs tend not to fully trust that the relationship itself—the simple and fun interactions between a mentor and a child—will actually result in anything meaningful. It is refreshing to read a description of a program like this that trusts that an empathic relationship with someone new is an ideal environment for children to learn and practice skills that will help them throughout their life. Mentors don’t need to be deliverers of anything beyond a fun, caring, and enjoyable relationship in which their mentee learns how to just be in the world, how to be their best self, and how to get along with others.

Now, the mentors in this program do get support from the program staff on how to customize the mentoring experience precisely for the things that their mentee needs to work on. But they also trust that the relationship itself is a sufficient form of intervention, something that is not often seen in today’s competitive funding environment where programs chase outcomes so intensively that the mentoring relationships sometimes don’t look much like mentoring relationships when all is said and done.

Obviously, this approach might well not work for older youth or for youth experiencing more serious challenges or needing very specific help. But Baloo and You starts young, with foundational aspects of being a healthy person, and trusts that a mentor will help that child just by hanging out and being a role model. As one of the study authors puts it, “in the concept of the ‘byproduct,’ countless indirect paths lead to the finish line.” And as these evaluations showed, these youth got to the finish line, not by getting dragged there by their mentors, but by being given the freedom to be a kid having fun and talking about stuff with a new caring adult. What a novel concept for a field that increasingly seems to be distrustful of simply giving a child some love and a good time.

2. Simple still needs support.

While it’s true that the mentors in Baloo and You were mostly free to just concentrate on being in the relationship and engaging in fun activities, there was more going on behind the scenes than that. As noted above, mentors met with staff who were professionals in education or psychology so that they could emphasize the right things when meeting with their mentees or come up with activities specifically tailored to the needs of the young person. While it’s true that most of the activities mentors and youth engaged in could be described as “everyday activities,” that doesn’t mean they weren’t chosen with some intentionality in mind. An activity like baking cookies together could provide an opportunity to work on all kinds of things, such as being organized and following directions, cooperation and taking turns, concentrating on the task at hand, and exercising patience (don’t eat the raw dough!).

One of the neat things the program did to facilitate these check-ins and allow the mentors to get input from more knowledgeable child development experts was to ask each of them to keep a diary about how the match was going, specific challenges expressed by the mentee, and areas where they felt like they could need some guidance. These diary entries were instrumental in letting the program staff know what was working and how they could offer specific suggestions for activities that might give children additional opportunities to work on areas of need.

The diary entries also provided amazing content for use in the program evaluation, shedding light on exactly how the activities within the relationships were helping youth grow and learn those valuable life skills. For example, diary entries around arts and crafts and cooking activities illustrated just how children were learning organizational skills. In fact, coupled with other data, they were able to show that the more often children engaged in those activities, the less often they did things like forgetting to bring their books to school. Other programs may consider having mentors fill out diaries or other robust activity reporting forms as a way of knowing what’s happening under the surface of relationships.

3. Good examples of evaluation designs that fit the program.

The two evaluations of the Baloo and You program offer a few interesting wrinkles that other practitioners could learn from and mirror in their programs.

  • Good strategies for getting reliable information from younger children – One of the challenges in serving younger children like this program does is how to get reliable and accurate information from children who may struggle with concepts being asked about, may face challenges in filling out pencil and paper scales, or simply might struggle to reflect on their experiences. The Kosse paper in particular has a nice section describing how they did interviews with the children and how they used games and other interactive play to do things like establish baseline assessments and show gains at the end of the program. Programs working with younger children might learn some interesting techniques from these articles.

  • The value of multiple control groups – One of the most interesting aspects of the Kosse article is the use of two control/comparison groups—youth from low socio-economic status (SES) households (which mirrored the treatment youth) and youth from high SES households. The researchers obviously wanted to know if the program could improve mentees’ prosocial behaviors in comparison to their peers, but they also hypothesized that mentees might wind up, through the relationship offered in the program, “catching up” to youth growing up in higher SES households (where they theorized that youth would have access to more caring adult relationships and opportunities to build strong prosocial skills in a variety of settings).

    Sure enough, the mentored youth did outperform their low-SES peers and were essentially indistinguishable, even at the two-year follow-up, from their higher-SES classmates in terms of their prosocial behavior. But credit the evaluators with digging deeper. They also examined the results of the program through the lens of the mothers’ own prosocial behaviors. While it is true that mothers from low-SES households did tend to score lower on measures of prosocial behaviors, it is entirely possible that high-SES children can also find themselves in homes where the adults fail to provide sufficient interactions and activities to build prosocial skills. This provides a more nuanced view of who might benefit from the program than household income alone. And as predicted, the findings showed that children whose mothers displayed low levels of prosociality were most likely to benefit from the program, illustrating that the work with the mentors offered these youth something that could fill in gaps in what they were perhaps not receiving at home.

    By using multiple comparison groups and by examining factors beyond simple SES status, the program was able to unpack some of the mechanisms of change at the heart of the program and learn quite a bit about the types of youth who might benefit most from the program moving forward.

