My Life Mentoring

Insights

In considering the key takeaways from the research on this program that other mentoring programs can apply to their work, it’s useful to reflect on the features and practices that might have influenced its rating as “Effective” (that is, a program that has strong evidence that it achieved its justice-related goals).

1. Customization of services for youth in foster care and with disabilities is a must.

The evaluations of this program describe several efforts to customize the program to the needs of foster youth, especially those with disabilities. For example, the TAKE CHARGE curriculum at the heart of the My Life model was originally designed to be used with all youth, the idea being that self-determination skills help all young people as they approach young adulthood. But the program adapted this core curriculum to focus exclusively on goals related to foster care transition or educational attainment, areas where these youth tend to struggle. The developers also infused specific skills and competencies to support goal setting, with revised lessons focusing on skills like establishing “support agreements” with caring adults and working effectively with key individuals, such as judges, attorneys, and child welfare professionals. They also allowed for deviation from the set sequence of lessons in the traditional TAKE CHARGE model. Because youth in foster care are often in crisis or wrestling with challenging situations, the program allowed lessons on key skills like problem solving and seeking help to be delivered before the goal setting work began. This let youth address immediate concerns as needed and build a more stable foundation for thinking about goals and setting plans of action.

In addition to customizing the curriculum, the program also placed greater emphasis on who served in the coach/mentor role, emphasizing the recruitment of individuals who were slightly older than participating youth and who had themselves been in foster care or wrestled with a disability. This is another example of a program opting to use “credible messengers” when working with vulnerable youth, the idea being that these individuals can relate more effectively to the struggles and life histories of mentees. Having near peers who had overcome similar challenges in the mentoring role also offered these youth a powerful example of what their perseverance might translate into. One can imagine that it’s easier to think about literally “taking charge” of one’s life when the person encouraging you to do so is a living example of what can happen when that commitment is honored. Many of these mentors were program alumni, often enrolled in higher education or building solid careers. Having a strong example of a potential “future self” in the mentoring role surely helps mentors and mentees bond and work collaboratively together.

Other small nuances of the program also reflected just how customized the intervention was for these youth. For example, weekly coaching sessions were held at the youth’s home or school at a time that worked for them (such as a free class period), an approach that recognized that these youth often had hectic schedules and placement moves, and that meeting them wherever they were, with flexibility, would be important. These tweaks to the services, the curriculum, and the background of the mentor all seem likely to have contributed to this program “working” for participants. It’s worth noting that each iteration of the program saw very little attrition among youth participants; they likely felt right at home in a program built for them.

2. Self-determination is a powerful tool when working with youth who have much of their daily life tightly controlled.

What’s interesting across the evaluations here is that self-determination was used to a few different ends, but showed elements of success each time. The initial Powers (2012) evaluation focused on using self-determination to boost transition planning and outcomes, while the Geenen (2013) study looked at educational planning and attainment for a similar (but slightly younger) group of foster youth. The Blakeslee and Keller (2018) study then examined the longer-term juvenile justice impacts for both cohorts. In all three cases, this self-determination approach appears to have had a real impact, with several of the studies reporting evidence of positive changes with medium to large effect sizes, a magnitude of impact rarely seen in mentoring programs.

We’ve written about self-determination as a key ingredient several times for the NMRC, most notably in the practice implication sections of our evidence reviews on Mentoring Youth in Foster Care and Mentoring for Youth with Disabilities. Readers are encouraged to look at those publications for more detail about the value of self-determination coaching in the context of mentoring. But at the end of the day, self-determination is a form of empowerment. And whether it’s in Torie Weiston-Serden’s radical new idea of a “critical mentoring” approach or the tried and true 5 Cs of youth development, we all recognize that empowering youth to be active and engaged in not only their own future but in contributing to the world around them is one of the core principles of good mentoring. It may take weekly coaching, and a lot of hand-holding and trial and error, but the TAKE CHARGE approach seems to really help youth find the right blend of mindset, motivation, and external support to make a difference.

3. Using prior data as a jumping off point for future program evaluation.

Practitioners should take note of something really clever about the Blakeslee and Keller study. Their longitudinal findings involved only one original data collection point: “time 4,” several years after youth from the Powers and Geenen studies had exited the program. The researchers built on the strong data sets collected during those two prior federally funded studies by doing a simple survey follow-up with participants, plus some records checks, to learn about subsequent justice system involvement. This allowed them to conduct a rigorous longitudinal investigation of program impacts at a fraction of the usual cost, simply by building on the evaluation investments that had come before. It is also a great example of researcher collaboration; not all scholars are as comfortable as these were with having someone else build on their data.

For any program wondering why it should do rigorous data collection, or whether the data from previous evaluations is worth anything down the road, here is a clear example of two researchers doing something meaningful and new by building on data collected in the past. Blakeslee and Keller already knew that the program had good evidence of producing meaningful effects. With just one more round of data collection, they were able not only to show evidence of the program contributing to reductions in criminal justice involvement, but also to calculate cost-benefit estimates that can help draw more attention and funding to the model. Their estimates suggest that the program not only pays for itself, but could also save taxpayers money if implemented at scale. Hopefully this example gets practitioners thinking about what they could do with their old data and how they might build on prior data collection efforts with some clever follow-up activities.

4. A good example of cost-benefit calculations that others could emulate.

One last thing of note about the Blakeslee and Keller study is that it offers a very detailed explanation of exactly how they calculated the costs of the program, as well as the savings associated with various program benefits. Costs were calculated in a way that would be very helpful to policymakers, especially in calculating the costs to serve a youth in the existing program and separately calculating the costs if the program needed to be started up from scratch. This can help funders and policymakers in deciding whether to fund or expand the existing program or invest slightly more in starting up new replication sites—with the costs of each option clearly articulated, as well as the details of how those costs were determined.

On the “benefit” side of the equation, the authors were able to detail not only the costs of several negative youth outcomes (e.g., a night in jail) but also the savings that the program shows evidence of actually producing based on the sample studied. This allows funders and policymakers to see how the program may impact specific outcomes of interest and clarifies exactly how the program may well benefit taxpayers. They even instruct readers on how to calculate the benefits of this model in other parts of the country by plugging in their own cost estimates for things like incarceration and court costs.
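To make the arithmetic concrete, here is a minimal sketch of the kind of per-youth benefit-cost calculation described above: multiply each avoided negative outcome by its local unit cost, sum the savings, and compare against the program’s per-youth cost. All dollar figures below are illustrative placeholders, not values from the Blakeslee and Keller report; as the authors suggest, a locality would plug in its own estimates for things like incarceration and court costs.

```python
# Hypothetical sketch of a per-youth benefit-cost calculation.
# Dollar figures are illustrative placeholders, NOT values from the
# Blakeslee and Keller report.

def benefit_cost(program_cost_per_youth, avoided_outcomes):
    """Return (net benefit, benefit-cost ratio) for one youth.

    avoided_outcomes: list of (unit_cost, units_avoided) pairs,
    e.g. the local cost of a night in jail and the number of
    jail nights the program is estimated to prevent.
    """
    total_benefit = sum(cost * units for cost, units in avoided_outcomes)
    net_benefit = total_benefit - program_cost_per_youth
    return net_benefit, total_benefit / program_cost_per_youth

# Illustrative local estimates (placeholders):
jail_night = 150.0    # cost of one night in jail
court_case = 2_000.0  # cost of processing one court case

net, ratio = benefit_cost(
    program_cost_per_youth=3_500.0,
    avoided_outcomes=[(jail_night, 20), (court_case, 2)],
)
print(f"net benefit: ${net:,.0f}, benefit-cost ratio: {ratio:.1f}")
```

With these made-up inputs, the avoided outcomes total $7,000 against a $3,500 program cost, a ratio of 2.0; swapping in real local cost estimates is what the report walks readers through.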

In the end, they estimated that My Life’s benefits exceed its costs by up to three times, depending on whether one is replicating the model or simply funding the existing site. Programs, and the evaluators they work with, would be well served to review this section of the Blakeslee and Keller report, as it offers a stellar example of how to calculate program costs and estimated benefits and how to present them to stakeholders in a way that doesn’t overstate the estimated impact but makes it clear that further investment in the model is likely money well spent.

Readers who want to learn more about the Blakeslee and Keller study of My Life should listen to the Reflections on Research podcast episode from season 1 that featured both researchers describing their study and findings, including the cost-benefit work, in great detail.


*Note: The National Mentoring Resource Center makes these “Insights for Mentoring Practitioners” available for each program or practice reviewed by the National Mentoring Resource Center Research Board. Their purpose is to give mentoring professionals additional information and understanding that can help them apply reviews to their own programs. You can read the full review on the National Mentoring Resource Center website.

For more information on research-informed program practices and tools for implementation, be sure to consult the Elements of Effective Practice for Mentoring™ and the “Resources for Mentoring Programs” section of the National Mentoring Resource Center site.