iMentor

Insights

In considering what other mentoring programs can take away from the research on this program, it's useful to reflect on the features and practices that might have influenced its rating of "Null Effects" (that is, a program that has not produced evidence that it achieved its core justice-related goals).

The iMentor program has long been a leader in offering blended in-person and e-mentoring programming in schools, attempting to marry the close personal connection of in-person interactions with the convenience and timeliness of technology-facilitated communication. Its model represents an ambitious approach to making a big difference for high school students: the program attempts to serve every student in a school for all four years, alongside a range of other services and supports designed to strengthen the student experience and the school itself. The evaluation report referenced in this review describes how iMentor staff are embedded in the school day, offering classes that teach age-appropriate social-emotional skills and prepare students for post-secondary roles, as well as post-secondary planning activities designed to ease that transition.

This model has been recognized as promising in the past, enough that it qualified for a Social Innovation Fund grant that expanded the program and allowed for the rigorous evaluation referenced here. But, as with many Social Innovation Fund grantees, the evaluation results were more muted than one might have expected, for a variety of reasons. This innovative public-private initiative asked programs across many sectors to scale up their services and conduct rigorous evaluation, often on parallel schedules. Although some programs produced evidence of impact while also scaling, many experienced challenges along the way, and iMentor fits squarely into that category. This shows just how challenging it can be to build strong evidence of impact, even for well-planned and promising models like iMentor's.

So, what factors may have contributed to the lack of meaningful measured impacts in this evaluation? There are some clues we can look to in the evaluation report that other practitioners can learn from:

1. Is it a good idea to attempt to serve a whole school? There are many reasons why iMentor attempts to serve all of the students in a school: It specifically seeks out schools with high proportions of low-income students and students who may face challenges in college access or in shifting into post-secondary career paths. The model also relies on deep integration into the school, with iMentor staff teaching college readiness classes and coordinating the work of mentors with teachers and other faculty. With a model like this, it makes sense to serve as many students as possible, and making the program something all students experience likely helps with scheduling, coordination, and allocation of resources, in addition to addressing equity of opportunity. But that desire to serve everyone may come at a cost. Mentoring relationships are likely most valued by young people when they are voluntary and based on expressed needs. While students were not mandated to participate in iMentor, the program suffered tremendous drop-offs in participation over the four years it was offered. Participation rates were highest during freshman year, when many students may have been anxious about high school and seeking supports to help them through this stage of their educational journey. But even then, it seems likely that the program was being offered to many students for whom it was a "nice to have" rather than a "must have" form of support. As they progressed through high school, many of them may have found their footing academically, benefitted from other services and interventions, or simply lost interest in what the program was offering if it didn't seem tailored to their needs. By the end of the four years, participation rates were very low at all eight of the participating schools.
One wonders whether those rates might have stayed higher had the program been more focused in whom it served, directing its resources toward the students who needed this form of support most.

The good news is that students still rated their mentors as positive influences and reported liking and valuing their relationships. But the low participation suggests that this program wasn't quite what many students needed or wanted, and that they may have drifted to other programs or forms of support. The potential negative ramifications of attempting to serve a whole school or whole grade are something we have noted before in these reviews, so practitioners should think carefully about whether their program, good as it may be, is best offered to literally everyone.

2. E-mentoring likely requires lots of nudging and reminding. One of the major themes of this evaluation report was that all eight schools suffered from implementation challenges that resulted in students having far fewer interactions with their mentors than the model required to be maximally effective. These issues of "dosage" come up frequently in mentoring evaluations, and it makes complete sense that in order to benefit from something, you have to experience it. Interaction challenges take on increased importance in e-mentoring models, where there isn't an expectation of frequent in-person engagement and participants are more prone to an "out of sight, out of mind" break in communication. Research on e-mentoring suggests that these programs often require staff to do a lot of intentional prompting of mentors and youth to communicate: sending out discussion questions, weekly activities, and all kinds of other manufactured opportunities just to keep matches communicating digitally and to maintain the momentum of their texting and emailing. The iMentor program did take these steps; its Program Managers sent out a weekly prompt and tracked interaction rates, offering additional support to pairs that were not communicating on the platform as often as they should have. Students were expected to reach out to their mentors during class, and mentors were then supposed to write back before the next class, with a baseline goal of a 55% interaction rate. That simply may not have provided enough opportunities to interact, or enough compelling reasons to reach out and discuss things.

Interestingly, texting and talking on the phone between mentors and youth actually increased over the four years of the program, suggesting that perhaps the activities and prompts in the classroom and course platform were not compelling enough to produce meaningful engagement. Regardless of the reason, this is a good reminder to all e-mentoring programs to work hard at spurring engagement and to listen to participants about the platforms that might work best for their interactions. Unsurprisingly, the matches that expressed the most closeness had more interactions, both within the platform and over the phone. The evaluators noted that some of their metrics around implementation and interaction volume might have looked stronger had they thought to track those phone-based interactions more diligently. But there is a real chicken-and-egg dynamic at play: more interaction opportunities build closeness, and more closeness generates more interaction. E-mentoring efforts must pay close attention to both aspects.

3. In spite of many challenges, mentoring programs can always make a difference for some young people. Although the evaluation did not show statistically significant impacts on the core outcomes measured, it is worth noting that iMentor students were 8 percentage points more likely to graduate than students in the control group. Now, we can't be sure that it was iMentor that produced that difference, especially since the program did not show impacts on attendance rates or credits earned. But given that school reform and integration are core aspects of the iMentor model, it's worth speculating whether those embedded Program Managers, and the robust tracking of student academic performance and program engagement, may have had some impact on those rates of completion. Perhaps students in these schools got just a bit more support from teachers because they had an iMentor staff member checking in with them. Perhaps mentors were able to do just a bit of tutoring here, offer some encouragement to persist there, and it added up to a real difference in whether those students persisted through graduation. Maybe the parents of iMentor students placed just a bit more emphasis on school success. It's hard to say, based on this evaluation, what caused that 8-percentage-point difference. Maybe it's the result of a whole lot of small improvements, spread across many areas, surfacing at the end of four years as an unexpected positive finding.

It's a nice reminder that even in mentoring programs where youth as a whole don't appear to have benefitted much in the ways measured, individual youth are still benefitting and growing from the experience. The program may not have been a massive success for all who participated, but it very possibly helped many of these students in ways that will persist well into adulthood. It certainly may have made a difference for the additional students who graduated, and even when evaluations like this one seem disappointing, it's important to remember that many young people were likely helped by the experience. It's also important to remember that, by conducting a rigorous evaluation thanks to the generous investment of the Social Innovation Fund, iMentor now has a deeper understanding of its model and an opportunity to build on these signs of success.

For more information on research-informed program practices and tools for implementation, be sure to consult the Elements of Effective Practice for Mentoring™ and the “Resources for Mentoring Programs” section of the NMRC site.


Note: The National Mentoring Resource Center makes these “Insights for Mentoring Practitioners” available for each program or practice reviewed by our Research Board. Their purpose is to give mentoring professionals additional information and understanding that can help them apply reviews to their own programs. You can read this program’s full review on the CrimeSolutions.gov website.