Check & Connect

*Note: The National Mentoring Resource Center makes these “Insights for Mentoring Practitioners” available for each program or practice reviewed by the National Mentoring Resource Center Research Board. Their purpose is to give mentoring professionals additional information and understanding that can help them apply reviews to their own programs. You can read the full review on the CrimeSolutions.gov website.*


In considering the key takeaways from the research on this program that other mentoring programs can apply to their work, it’s useful to reflect on the features and practices that might have influenced its rating as “No effects” (that is, a program that has strong evidence that it did not achieve justice-related goals).

1. Flexibility in implementation can be a blessing and, perhaps, a curse.

One of the real conundrums mentoring programs face is how to build on and implement research-based “effective” practices and program models while also allowing enough flexibility to customize an intervention or specific practice for local context or needs. It can be a challenge to take something that worked in one place and apply it to a new population of youth or a new city or school, or to adapt aspects of the work to match the availability of local resources. There is a whole body of research (implementation science) devoted to these questions: Which components of a program are critical to keep as is, and which can be tweaked or even discarded? Did something that worked well for one group of kids work for a different one? Are there contextual factors that doom some efforts before they start? When is a model no longer a model?

Check & Connect is a program that has been thoughtfully developed and that has garnered considerable interest from educational institutions and nonprofits around the country looking for a solution to issues of school truancy and disengagement. Both of the evaluation reports discussed in the CrimeSolutions.gov review mention the many previous implementations of Check & Connect around the country, reviewing both the findings of those prior efforts and the role those findings played in the decision to implement this specific model in these new settings.

But what is striking about the two implementations of Check & Connect in this review is how different they are, despite being essentially the same program model. The study by Guryan and colleagues (Guryan et al., 2017) tested the program with youth in grades K-8; the average student in their cohorts was around eight and a half years old, likely a 3rd grader. The other study (Heppen et al., 2017) tested Check & Connect with students starting in grade 10, meaning that they received support from their mentor in their sophomore and junior years of high school. Paradoxically, the authors of the Heppen study conclude that the Check & Connect model probably works best for younger students who are not already so credit-deficient by 10th grade, while the Guryan study was quite clear that the strongest outcomes were for the older students who participated in 7th and 8th grade; it was far less impactful for younger elementary students. This does not mean that these two studies together have inadvertently found that the sweet spot for this program model is middle school students, but it does highlight just how much results can differ for one program model across the ages of youth served.

The differences in implementation across the two studies go deeper than just the ages of the youth. In the Guryan study, parent engagement is described as one of the four cornerstones of the Check & Connect intervention and, indeed, mentors contacted parents or guardians an average of twice a month. In the Heppen study, the role of parents is scarcely mentioned and their engagement or contact with mentors isn't reported; their role seems limited to being involved in more intensive interventions “when necessary” for some students (p. 4). The mentors in Guryan were employees of a community-based nonprofit hired to work in the schools, whereas in Heppen they appear to have been district employees tasked with serving students in this role. And in one study mentors had a caseload of 30 students, while in the other they typically had 50-60, roughly double that number.

And although both evaluation reports note that referrals to other services (e.g., mental health providers, dedicated tutoring, or broader wraparound services for the family) are absolutely critical for the intervention to succeed, neither report notes the volume or nature of those referrals or their impact on the outcomes studied. The authors do note that earlier implementations of Check & Connect focused on students with physical or learning disabilities, so perhaps those referrals were more important in those contexts. But still, there is little in these reports about how much mentors went beyond their own direct support in offering help to youth and families.

Both reports note that, in each instance, Check & Connect was implemented with fidelity, as intended. And both programs had mentors go through the recommended training and use the manuals offered by the developers. But in reading these two evaluation reports, one can't help but wonder if these two programs were so dissimilar as to leave the question of what this model truly looks like at peak implementation, or who it serves best, up for some debate. This may be a case where the general flexibility of the intervention encourages its application in contexts where it will be more challenging to deliver or where another approach might be more effective. Sometimes rigidity of implementation may be the practitioner's best friend.

2. How much time together is needed for a relationship to be meaningful and “close”?

One of the most intriguing aspects of the Check & Connect model is that it lasts for two full years of school for participating students. That's a long time in the context of most school-based mentoring programs, and it also crosses multiple school years, something that can be a challenge for in-school mentoring programs but that research suggests might be critical for sustaining impacts. Twenty-four months of a mentor at school checking in with you and helping problem-solve issues at school and home sure sounds like it would allow for some real bonding and trust-building between mentor and mentee and plenty of depth to their interactions. In fact, the descriptions of the mentors' role in the studies discussed in this review emphasize the meaningful social capital that these mentors provide and the critical role that meaningful relationships play in helping students overcome barriers to success in a social context like a school.

But even at two years per relationship, there are questions about how much mentoring happened in these implementations of Check & Connect. This has been a topic of debate in the mentoring universe for some time, and the studies presented in this review do little to quell that conversation. In the Guryan study, mentors met with students five times a month on average, although some of those meetings were group meetings with other students the mentor was working with, and the authors noted that the level of engagement varied quite a bit from mentor to mentor. Each contact with students was described as brief and intended to provide a “nudge.” In Heppen, mentors met with their students 37 minutes a month in Year 1, 50 minutes a month in Year 2, and 61 minutes a month in Year 3, with around 20 minutes a month over the summers. Given what most mentoring programs offer, that is very light interpersonal contact: MENTOR's 2016 National Mentoring Program Survey found that approximately 77% of the nation's programs offer youth at least 90 minutes of mentoring a month (Garringer, McQuillin, & McDaniel, 2017). These mentors simply weren't spending very much time with their mentees. And their time was split across 50 to 60 students! That sounds like a situation where every student was getting that “check,” but one wonders if these relationships brought the social capital, personal touch, and authenticity of interaction that is assumed in the model itself. One wonders if these relationships were… relationships.

Other research has hinted that even in-school mentoring relationships need some opportunities for fun and playful one-on-one interactions that support bonding and closeness; this has also proven true in other out-of-school mentoring interventions for youth who are struggling with challenging issues. It doesn't seem clear, from the two implementations of the model described here, that Check & Connect allows for that. This is an issue that has been noted for other narrowly focused school-based mentoring interventions, but there is also evidence that it is fixable when programs place a bit more emphasis on relational bonding. It's not clear how that might happen with a caseload of 50+ high school juniors.

3. When designing an intervention for youth with attendance issues, plan for their inherent mobility.

One of the frustrating aspects of these two evaluations of Check & Connect is that the mobility of the students themselves influenced the delivery of services and the ability to achieve the stated goals of the programs. The Heppen study, in particular, suffered from students moving within and out of the district; those students had much worse outcomes than students who stayed either at the same school or in the district. While there are many practical reasons why mentors would have a very hard time continuing to follow up with students who moved out of the district, it is also true that the developers of these programs had plenty of warning that this would be an issue. If the goal was to continue to offer the intervention as intended to these students wherever they enrolled, one wishes that there had been a better contingency plan (dedicated staffing for students who moved out, increased contact with parents when students moved, some ability to travel in person to neighboring districts, e-mentoring platforms where the relationships could continue) to serve those students more effectively. The authors in Heppen conclude that

“for highly mobile students, large caseloads may prevent mentors from being able to track down and spend an adequate amount of time working with all of their students… Mentors in this study attempted to connect with students who left district schools, but they found this process difficult and time consuming and were concerned that it was taking time away from other students on their caseload. In response, the program developers and implementation team decided during the study to prioritize delivery to the nontransfer students.”

In other words, they seem not to have prepared adequately for the mobility of a group of students that, by definition, tends to be highly mobile and, as a result, shifted gears on these students mid-study. Practitioners designing services like this, intended to put students on a path toward course completion and graduation, may want to plan for that mobility from the outset and draw clearer lines about how those students will and won't be served.

4. Think about how your program can access outcome data at multiple layers and long after services have ended.

The Heppen evaluation offers a nice example of how a school-affiliated program that wants to increase school completion and graduation can track those outcomes even if they occur many years after youth have participated in the program. The evaluators made sure to secure access to school district records to see if youth who participated as freshmen, sophomores, and juniors eventually graduated from high school or wound up getting a diploma years later if they happened to leave school prematurely. But as noted above, the study had quite a bit of attrition as youth moved across schools and even out of the district altogether.

To address those circumstances, the evaluators went up a layer in the education data system, accessing student records at the state level. Many states have become particularly skilled at collecting and sharing student data for research purposes through statewide longitudinal data systems. Programs and evaluators are encouraged to learn what these systems in their areas can provide and use that information to help determine the true impact of programs over time, even when students move away (but remain within the state).


For more information on research-informed program practices and tools for implementation, be sure to consult the Elements of Effective Practice for Mentoring™ and the "Resources for Mentoring Programs" section of the National Mentoring Resource Center site. 
