Displaying items by tag: School environment

Thursday, 22 January 2015 08:47

Brief Instrumental School-Based Mentoring Program

*Note: The National Mentoring Resource Center makes these “Insights for Mentoring Practitioners” available for each program or practice reviewed by the National Mentoring Resource Center Research Board. Their purpose is to give mentoring professionals additional information and understanding that can help them apply reviews to their own programs. You can read the new full review on the Crime Solutions website. That review covers an updated version of the program, based on a fresh evaluation. You can read the previous version’s insights here.


There are several notable features of the Brief Instrumental School-Based Mentoring Program that others providing mentoring in a school setting can learn from and consider building on. One of the most interesting conceptual ideas behind this program is that it builds on the theory, developed by researchers such as Tim Cavell and others, that a mentoring relationship can be viewed as a “context” in which other meaningful work occurs, rather than the relationship being the sum and end of what is provided to a child. This theory speculates that perhaps it’s not the close, long-term bond between a mentor and mentee that matters so much as the transformative power of the activities they engage in and the skills that get developed within the context of the relationship. This program very intentionally built on this idea, placing more emphasis on the sequential set of activities that matches did over time than on the fostering of a deep and long-lasting relationship.

This approach may have special appeal for schools or other institutional settings where building a meaningful long-term relationship can be difficult. The chaotic nature of the school environment, the frequent breaks in the school calendar, the relatively short duration of a school year, and students’ moves between schools all work against the establishment of the type of intense and lasting relationship we typically associate with mentoring. So the developers of this program decided to try something different in a school setting: They intentionally shortened the time that mentors and mentees would spend together and put a lot of thought and effort into developing a meaningful set of sequential activities that would, in theory, provide the youth with meaningful support around their academic achievement, school behavior, and connectedness to school and teachers.

This more instrumental form of mentoring, where purposeful activities drive the interactions between mentor and mentee, holds promise for programs set in schools (for example, a similarly instrumental program reviewed by the NMRC Research Board was rated as “promising”). There were several program features that brought this more instrumental approach to mentoring to life:

  • The concept of academic enablers. One of the core ideas promoted by the program’s activities was the concept of academic enablers, things like good study habits and organizational skills that research suggests are pretty malleable and that can really boost a young person’s academic performance. This seems to be a good fit for the middle school population served in this program, since middle school is a time when more gets asked of students and those with poor study or organizational skills may begin to struggle with coursework and tests for the first time. The developers of this program built the activities on a framework from other research involving academic coaching, testing the idea that mentors could provide much of the same support.

  • Counseling skills. To facilitate the academic-focused work, and building on the “coaching” concepts that informed program activities, mentors in the program were trained in several strategies that have proven to work in other settings, specifically counseling strategies such as motivational interviewing. This client-centered conversational style helps clients (in this case students) set and achieve goals, primarily through facilitated conversations that follow some basic principles: “Ask, Don’t Tell”, “Avoid Argumentation”, and “Support Progress Toward Goals.” These approaches help the relationship avoid conflict and the feeling that the mentor is being “prescriptive” toward the youth, while making sure that the mentor’s words and wisdom are applied toward realistic goals that the youth has set for himself or herself. Along with the training mentors received in these strategies, they were monitored to ensure that they followed these principles in their meetings with their mentees.

  • An emphasis on Connectedness and Life Satisfaction. The program also tried to boost the feelings of connectedness to schools and teachers, as well as the student’s overall life satisfaction. Research indicates that both of these influence academic performance and engagement with school, so several program activities were designed to support growth in these areas, mostly by helping youth identify and overcome challenges and build skills that would help them experience academic success they could build on.

Given that this program was built on these concepts and principles that had proven to work in other settings, why did it not produce stronger results? The mechanisms for change seem very well conceived, yet the program produced only a limited number of positive effects (improved math grades, reduced disciplinary referrals, small increases in life satisfaction) while failing to “move the needle” on most of the outcomes the program was designed to address.

It turns out that perhaps the closeness and depth of that mentoring relationship matters more than we might think:

  • The mentoring relationship lasted all of eight sessions. The idea behind these relationships was that they would be more of a “working alliance” than an intimate, sustained relationship. And there is no doubt that these matches did bond some and got along well enough to engage in meaningful activities together. But only the first two of the eight sessions focused on relationship building and getting to know each other. After that, the remaining six weeks were focused either on goal setting and building academic enablers or on problem solving and giving the youth feedback on their progress. While eight sessions with a mentor can surely be meaningful, one has to wonder just how close these youth felt to their mentors, how much weight they put on their words, and how deeply their messages sank in. Arguably, there is a world of difference between having an adult you work on things with and having an adult whose words and wisdom carry power, whose opinions deeply matter. Unfortunately, the researchers behind this program did not measure relationship closeness. It would have been interesting to see how the participating youth viewed their mentors. One wonders what the results might have been if this program had emphasized the “brief” and/or the “instrumental” just a bit less or otherwise figured out a way to give the participants more time to build a stronger relationship rather than an “alliance.”

  • The mentoring meetings all happened in a designated mentoring room. Although not all matches met at the same time (meetings happened during youths’ free periods), it seems likely that matches at times were sharing this space with other mentors and mentees. The study doesn’t shed much light on what the environment of the room was like, but one can imagine it being fairly chaotic and loud, and not offering much privacy or room to really bond or discuss a sensitive topic. School-based programs can really struggle with the question of where matches meet on campus, but it can be especially challenging if multiple matches are meeting in the same space. This could have been another barrier to relationship quality and the subsequent outcomes of the program.

So this winds up being a program with a very promising design, but one that fell short of some expectations in terms of outcomes. Its design rested on the notion that the mentoring activities would matter more than the depth of the mentoring relationship. Without a measure of relationship quality, we can’t know whether that assumption was wrong. Yet, as discussed above, there is reason to suspect that the activities could have benefitted from the stronger underpinning of a meaningful relationship between mentor and mentee.

The good news is that the researchers behind this program are continuing to adjust accordingly and are currently testing a next iteration of the program that might produce stronger results. In fact, the version of the program reviewed here was itself built on a previous version that had shown weaker results in evaluation. With each iteration of the program, they are evaluating and refining the model, adding carefully selected elements and concepts designed to fix the gaps revealed in their research. This kind of evaluate-refine-reevaluate approach is commendable and something that all programs should aim for at some level. The only way to build a better program is to evaluate its effectiveness, learn where there might be weaknesses, and try again. This instrumental approach to school-based mentoring makes a lot of sense, and the hope is that a future version of this program might find the right balance between academic supports, instrumental activities, and meaningful relationships.

For more information on research-informed program practices and tools for implementation, be sure to consult the Elements of Effective Practice for Mentoring™ and the "Resources for Mentoring Programs" section of the National Mentoring Resource Center site.

Evidence Rating: Promising - One study

Date: This profile was posted on August 08, 2017


Program Summary

This is a school-based, one-on-one mentoring program designed to improve academic performance and life satisfaction, and to decrease absences and behavioral infractions, among middle school students. The program is rated Promising. The intervention group had significantly fewer unexcused absences, significantly higher math and English grades, and significantly higher self-reported levels of life satisfaction. However, there were no effects on school-reported behavioral infractions or on grades for science or history.

You can read the full review on CrimeSolutions.gov.

*Note: The National Mentoring Resource Center makes these “Insights for Mentoring Practitioners” available for each program or practice reviewed by the National Mentoring Resource Center Research Board. Their purpose is to give mentoring professionals additional information and understanding that can help them apply reviews to their own programs. You can read this program's full review on the CrimeSolutions.gov website. 


In considering the key takeaways from the research on this program that other mentoring programs can apply to their work, it’s useful to reflect on the features and practices that might have influenced its rating as “Promising” (that is, a program that has shown some evidence that it achieves justice-related and other goals when implemented with fidelity).

Update: One of the nice features of the Crime Solutions rating process and listings of reviewed programs is that these reviews are subject to change if a program undergoes additional evaluations over time. Subsequent evaluations will often confirm that a program model is effective, or perhaps even indicate that it can be applied more broadly or with even greater impact under certain conditions. Other times, subsequent evaluations can give contradictory results, throwing the efficacy demonstrated in prior studies into doubt. In the case of the Brief Instrumental School-Based Mentoring Program, whose revised version had risen to the level of “Promising” based on studies from several years ago, the program remains at the “Promising” designation, although the newest study did have decidedly mixed findings.

It is worth noting that the most recent evaluation of the program found positive estimated impacts for youth participants: significant effects on school behavioral infractions, math grades, and students’ reports of emotional symptoms and school problems relative to their peers. But this program has shown both positive and null effects across a number of outcomes and multiple studies. What is a practitioner to make of a program whose record on effectiveness is a bit of a “mixed bag”? We unpack a few of the reasons for these mixed results here and emphasize things that practitioners may want to keep in mind when designing and evaluating their programs. The original Insights for Practitioners from 2017 is also included below, as it still offers plenty of good food for thought that mentoring professionals will want to consider.

1. Beware the perils of looking for too many outcomes, too often.

One of the great things about this program model is that the developer, Dr. Sam McQuillin of the University of South Carolina, is constantly tinkering with it, making small-but-meaningful tweaks over time in an effort to improve it. In fact, you’ll notice below that in the original version of these Insights we praised the developers for continually tweaking and testing the program in a continuous improvement mindset. It was this process that got the revised version of this program to a “Promising” rating after the original version was rated as “No Effects.”

But there is risk every time you tweak a program’s delivery or evaluate its outcomes. There is also risk in looking for a wide variety of outcomes that, while relevant to the work of the program, may not be of critical importance in determining whether the program is achieving its mission. Both of these factors played a role in the mixed results of the new study. This latest evaluation was an attempt to try some new aspects of the training and to serve the youth with the most need in the participating schools. McQuillin and McDaniel (2020) noted that this iteration of the program sought to serve the “middle school-aged adolescents who displayed the highest levels of school-reported disciplinary infractions,” while also studying the effectiveness and ease of delivery of the motivational interviewing-based activity curriculum. Both the population-specific effectiveness and the ease of “delivery” of the intervention are good things to test, and any opportunity to learn more about the program is likely beneficial. But positive outcomes aren’t guaranteed, even for a program that has produced them in the past. And every fresh look is a new chance to “fail.” Or at least not get an “A.”

This latest evaluation of the program did find a number of positive outcomes for participating youth, as noted above. But it also failed to find them in a number of other categories: grades other than math, and teacher-reported outcomes like behavioral problems and academic competence. The other recent evaluation that informed this updated review also failed to find significant positive effects for youth in the program compared to their unmentored peers. The Crime Solutions rating system is a high bar for programs to clear, and with good reason: We must ensure that public investment in prevention and intervention efforts actually reduces crime, delinquency, and related factors. But it is also a rating system that can disincentivize further research into program models, or innovations that might improve a model that has already proven effective, assuming the goal is to have one’s program found to be effective. Every evaluation undertaken and every outcome examined winds up being another opportunity to get an unexpected result, one that can lead to a less desirable classification of the overall evidence and contribute to a public perception that the program is ineffective (or at least less effective than previously suggested). There are many programs in Crime Solutions that genuinely achieved nothing of value, and they should be labeled as such. But lumping programs that have demonstrated some positive impact in with those others seems misleading at best and inaccurate at worst. Thankfully, that outcome was avoided here and the program kept its rating. Still, this downside to looking at too many outcomes at once, or to continually engaging in high-quality evaluations, should be noted by programs and evaluators alike, as it can carry negative public consequences.

2. College students might be a good fit as mentors if your program wants to integrate a complicated new curriculum or emphasize fidelity.

One of the interesting aspects of the implementation of this program is the use of college students as the mentors. This program utilizes a curriculum and set of activities based on motivational interviewing (MI), a strategy for changing how people think and behave without, in essence, ever asking or “prescribing” them to do so directly, and one that is notoriously challenging to master, even for experienced therapists and social workers. The mentors in this instance were college students, many of them psychology or related majors who might have an inherent interest in learning about MI. One of the main goals of this study was to see if the mentors liked the materials and could use the MI strategies effectively. In the McQuillin and McDaniel (2020) study, these college students reported that, for the most part, they found the material and techniques of the program to be acceptable, appropriate, and understandable. But one wonders whether mentors who come from all walks of life, and who might not be as interested in a concept like MI, would offer similar ratings.

Thankfully, Dr. McQuillin is partnering with MENTOR, a national organization devoted to supporting mentoring programs with training and technical assistance, to pilot test materials based on this program in a wide variety of programmatic settings to see if mentors generally find the materials to be helpful and usable, or if there are subgroups of mentors, such as college students, who might be more effective in their use. The hope is that other mentors can use MI as effectively as college students have in the several trials of this program.

3. Programs should be encouraged to try out new strategies and mentoring techniques!

In spite of some of the challenges around properly quantifying the impact that this program has demonstrated on participating youth, the reality is that many mentoring programs are trying to find ways to make their mentors more effective and are turning to strategies like MI to try to boost their results. The most recent study of this program offers several great ideas to keep in mind. First is the importance of testing feasibility, understanding, comfort, and use when implementing a new approach or concept, as this program did. The evaluation report features a great chart showing all of the different things they asked mentors about: Were the materials easy to understand? Were you enthusiastic about using them? Did they seem to fit the mission of the program? Did they fit your style as a mentor? Did you have challenges trying this out in real life? Did you need more training and support? By examining all those aspects, the evaluation was able to determine that, at least in this program setting, perceptions of the training and materials were positive. No program can do great work if the mentors haven’t bought into the process and aren’t happy with the tools.

But there were critiques from the mentors. Some felt like the program needed parent “engagement” if not outright direct involvement if the MI work was to be successful. Perhaps more importantly, the mentors wanted more training and practice on the use of the MI strategies. They understood the materials. They “got” the concepts and were willing to use them when meeting with mentees. But even these really bright college students still felt a need for more practice and training, something that isn’t surprising given the aforementioned challenges of MI use.

I am sure the developers of this program are working on other studies that will tweak some things and test them to see if they can solve the “more training” issues. That’s what a continuous improvement approach looks like. Here’s hoping they don’t get inadvertently punished by policymakers once those studies are done for having the guts to keep looking and learning.

Original Insights for Practitioners posted in 2017:

In considering the key takeaways from the research on this program that other mentoring programs can apply to their work, it’s useful to reflect on the features and practices that might have influenced its rating as “Promising” (that is, a program that shows some evidence that it achieves justice-related and other goals when implemented with fidelity).

1. The value of testing program improvements over time.

One of the best pieces of advice for youth mentoring programs is that they should always be tinkering with their approach and testing the results to see if aspects of the program can be improved. This can be as simple as adding fresh content to mentor training, beefing up match support check-ins, or directing mentor recruitment toward specific types of individuals because they might be a better fit for the role than previous mentors. Any good program will constantly be looking for subtle ways to make what they do better. And in the case of the Brief Instrumental School-Based Mentoring Program, we have a great example of the type of payoff that can happen when a program takes this kind of iterative approach to developing its mentoring services over time.

An earlier version of this program was reviewed for Crime Solutions and found to have “No Effects,” meaning that it did not produce meaningful outcomes in many of the main categories in which it was trying to support youth. But as noted in the “Insights for Practitioners” for that iteration of the program, this is a program that has constantly evaluated and revised in a continuous improvement approach. In fact, the earliest version of the program showed evidence that it may have actually produced some harmful effects for youth. So the developers improved the training and content of the program and evaluated it again, finding that they had eliminated the harmful impacts but not quite reached their goal of meaningful positive outcomes.

So they revamped and tried again, once more emphasizing rigorous evaluation to see if their improvements worked. This version of the program added a host of improvements: the provision of a mentee manual, revised mentor training and supervision, more choice for mentors on match activities, and e-training and support. And these changes again paid off in a measurable way.

Figure 1

This figure, taken from the report of the evaluation conducted by McQuillin and Lyons, shows the notable improvements that are evident across the three versions of the program. In each of the outcome areas listed along the x-axis, we can see that this latest version of the program outperformed the earlier versions, sometimes markedly, with several outcomes showing effect sizes (levels of impact) consistently well beyond the average of around .20 found in the last comprehensive meta-analysis of mentoring program effectiveness. This figure is a powerful example of why we need to allow mentoring programs some missteps along the way, so that they have the opportunity and time to try new ideas and improve weaker aspects of what they are trying to do. Unfortunately, many programs lose support and funding after an evaluation that shows poor or even less-than-desired results. Others may develop internal cultures that do not support questioning current practices and thus miss out on opportunities to improve through innovation and evaluation. Policymakers often view programs as “working” or not, when the reality is that most programs need time to work out the kinks and a chance to find better ways of delivering their services. The story of this program really highlights why practitioners need to be constantly trying to improve what they do, and why program stakeholders need to exhibit some patience as the program works toward its “best” design.

2. The more complicated the mentors’ task, the more supervision and support they need.

Among the many improvements that were made in this version of the Brief Instrumental School-Based Mentoring Program, perhaps none is more notable than the bolstering of the supervision and support offered to mentors. Although the evaluators of the program did not test to see if this enhanced supervision was directly responsible for the improvements in the program’s outcomes, it is quite plausible that this played a meaningful role.

For this version of the program, mentors were provided not only with considerable up-front training but also with a wealth of support and guidance in delivering the program’s curriculum and integrating it into the relationship that each mentor had with his or her mentee. Keep in mind that this program asks mentors to engage in a number of highly particular conversations and activities with their mentees that build in a sequential fashion and provide youth with very specific skills and ways of thinking about themselves and their academics. Mentors are asked to engage in motivational interviewing, apply cognitive dissonance theory, and teach academic enabling skills. Doing these things well requires adhering closely to a set of phrases, behaviors, and talking points, and the developers of this program didn’t want to simply do a training and leave the rest to chance (as they had in the past).

Before each session, mentors would meet with a site supervisor who would review the curriculum for that particular session, reinforce keys to delivering the content well, and answer mentor questions. After each session, mentors again met with site supervisors to go over their checklists of actions to see if they had covered all of the content they were supposed to during the session. Mentors were also encouraged to engage in additional phone-based support with their supervisor as needed, and were provided with a manual they could refer to at any time to get more familiar with the content and practice the key messages they were supposed to be delivering to mentees.

The type of intensive “just-in-time” mentor training and supervision offered by this program is likely beyond what most school-based mentoring programs can offer their mentors with their current resources (it’s also worth noting that most programs are simply not asking mentors to engage in activities that are this tightly controlled and specific). Just about any program could, however, borrow the concept of check-ins with a site coordinator before and after match meetings as a way of boosting program quality. These check-ins could be brief but critically important for reviewing what the match is focusing on, sharing information about what the mentee has been experiencing recently (at school or at home), and reinforcing key messages or talking points that have the potential to either help make the relationship stronger or offer more targeted instrumental support. As the evaluators of this program note, “One persistent problem in SBM intervention research is the confusion surrounding what occurs in mentoring interactions and relationships,” which creates challenges not only in improving the program but also in helping others replicate proven mentor strategies in other programs. Using the kind of rigorous pre-post mentor support used by this program might allow programs to understand what mentors and mentees are doing when they meet and make sure that important aspects of the program are delivered with fidelity, even if those aspects are not as complicated as those in this particular program.

3. Remember to borrow evidence-supported practices and ideas from other youth development services.

We have frequently encouraged mentoring practitioners in these “Insights” pieces to borrow ideas and tools from other youth-serving interventions, both mentoring and beyond. The Brief Instrumental School-Based Mentoring Program-Revised offers a great example of this in its sourcing of the “academic enablers” used by mentors. Rather than inventing a new set of tools and techniques, the program simply borrowed and adapted the materials from an intervention for children with ADHD that used coaches to help improve academic skills. The materials had already been used with good results in a short-term intervention, meaning they would fit into the timeline of the mentoring program. Their prior success with ADHD students also meant they would likely be a good fit here for any mentees with learning disabilities. So rather than reinventing a suite of activities to teach mentees about agenda keeping, planning, and organization skills, the program simply adapted something that already existed and fit their school-based model. This type of adaptation can save staff time and resources, while increasing the odds that the program is doing something that will be effective.

4. Don’t forget that mentoring programs are relationship programs.

The Insights about the earlier version of this program noted that there may have been challenges in developing relationship closeness given the program’s brief duration (8 sessions) and the highly scripted nature of the interactions (lots of curriculum delivery, not a lot of fun): “One wonders what the results might have been if this program had emphasized the ‘brief’ and/or the ‘instrumental’ just a bit less or otherwise figured out a way to give the participants more time to build a stronger relationship rather than an ‘alliance’.”

Well, this iteration of the program did exactly that by emphasizing more play, games, and freedom in session activities (provided that the core intervention was completed). As explained by the researchers: “the SBM program described in this study includes a variety of activities designed to promote a close relationship between the mentor and mentee because the quality of the mentoring relationship is theorized to be critical for helping mentees achieve their goals. Within an SBM context, brief effective instrumental models of mentoring that also include activities to develop close relationships may be an ideal ‘launch-pad’ for SBM programs with a developmental model to extend the period of the brief mentoring beyond the brief of mentoring.”

This serves as another example of this program learning from a prior attempt and doing something better (in this case strengthening the relationships themselves). It also provides some food for thought for any school-based mentoring program. Given that many school-based programs are brief and focused on fairly instrumental pursuits, how can these programs not only strengthen the relationships but also keep those relationships going once they get strong? It seems a shame to have a program forge new and meaningful relationships, only to use them in service of something that is, by design, short term. School-based programs should consider partnering with other programs or find other ways of using a “brief instrumental” program as a testing ground for relationships that can transfer to another program or setting if they “take root.” The developers of this program note that the ideal goal of school-based mentoring might be to “increase the immediate impact of mentoring on student outcomes and promote long-term mentoring relationships desired by developmental models of mentoring.”

It is unclear how many of this program’s brief matches (if any) went on to have longer-term engagement outside of the program. But all school-based programs should be asking, “How can we keep these relationships going once they have tackled the targeted things we ask them to achieve?” Without exploring that, we may minimize the value of school-based mentoring and deny mentees and mentors the opportunity to grow something brief into something quite monumental.


For more information on research-informed program practices and tools for implementation, be sure to consult the Elements of Effective Practice for Mentoring™ and the "Resources for Mentoring Programs" section of the National Mentoring Resource Center site.

Wednesday, 29 November 2017 09:14

Bulletin Examines Trauma-Informed Classrooms

NOVEMBER 29, 2017
BY: ABBY LORMER, PROGRAM QUALITY AND TRAINING VISTA, MENTOR

The National Council of Juvenile and Family Court Judges has released "Trauma-Informed Classrooms." This OJJDP-funded technical assistance bulletin provides an overview of the impact of trauma on students and explores how adverse life experiences can affect their behavior in the classroom. The bulletin also offers strategies for creating trauma-informed classrooms. This bulletin and the associated webinar will be useful for youth practitioners across the board, because integrating a trauma-informed approach into your program’s policies and procedures fosters resilience and recovery for the youth that you serve. This information may be especially relevant to mentoring practitioners implementing school-based or group mentoring models, since many of its recommendations, such as its discussion of common classroom triggers, can be applied to programs that bring young people and adults together regularly in groups.


Reference:

Pickens, I.B., & Tschopp, N. (2017). Trauma-Informed Classrooms. National Council of Juvenile and Family Court Judges.



*Note: The National Mentoring Resource Center makes these “Insights for Mentoring Practitioners” available for each program or practice reviewed by the National Mentoring Resource Center Research Board. Their purpose is to give mentoring professionals additional information and understanding that can help them apply reviews to their own programs. You can read the full review of the After School Version or the full review of the Mentoring Version on the CrimeSolutions.gov website.


In considering the key takeaways from the research on this program (Challenging Horizons Program) that other mentoring programs can apply to their work, it’s useful to reflect on the features and practices that might have influenced its rating as “no effects” (that is, a program that has strong evidence that it did not achieve justice-related goals), as well as some of the more interesting aspects of the program design.

1) This program evaluation offers a great glimpse into “The Practitioner’s Dilemma”

In just about any program that tries to improve the lives of people, there is always a tension (hopefully a healthy one) between the optimal form of support and services that would have the most impact and the often disappointing reality of what is actually achievable in the real world, given the available resources and the willing participation of those involved. The evaluation of the Challenging Horizons Program (CHP), in both its Afterschool (AS) and Mentoring (M) forms, offers a nice example of how programs can face challenges in implementing their theory of change in the most optimal way.

The reason that there are two versions of the program in the first place is that the developers of the program noted that implementations of the AS model had issues related to student attendance and overall attrition from the program. In their words, “staying after school 2 days a week for an entire school year is not desirable for many students”⎯to say nothing of the extracurricular obligations and transportation issues that may arise even for students who want to participate 2 days a week. Middle school students lead busy lives, so the Mentoring version of the program was developed to test whether a less-robust version offered during the school day could also lead to good outcomes, even if the developers themselves expected a reduced impact under the new format.

And why did they expect that reduced impact? (Which turns out to be exactly what their evaluation found…)

2) The differences in “dosage,” topics of emphasis, and delivery context really made a difference in how well these two models performed

The students served by the two versions of CHP were middle schoolers with Attention Deficit Hyperactivity Disorder (ADHD) who had already exhibited some challenges in school related to their condition. These youth differed from many of the typical students served by school-based mentoring programs in that they had a formal diagnosis of a condition that can really impact academic achievement and related aspects of life, like forming close relationships with peers and school staff. Overcoming the challenges presented by ADHD, specifically the difficulties related to staying organized, completing assignments, managing time, and paying attention during instructional time, will understandably take a lot of concerted effort and time. These students literally needed to build a new set of skills, strategies, and mindsets related to schoolwork, and do so halfway through their time in the K-12 system. So while the challenges were substantial, the two versions of the program approached them from pretty different perspectives, both in terms of design and execution:

  • The AS version offered students a full exposure to the curriculum of the program, targeting many different skills and strategies that students could learn and employ. The Mentoring version, however, usually whittled that down to one or two (80% of the participating youth got 2 or fewer pieces of the intervention).
  • The youth in the AS version of the program got significantly more time in the program working directly with a Personal Counselor who was trained in the intervention. In the Mentoring version, youth were paired with a teacher from the school with whom they had some kind of rapport, but with whom they spent far less time. Students in the AS version got an average of about 32 sessions over the course of the year, at 2 hours each. Students in the Mentoring version got about 25. But the real difference was in the time spent: the Mentoring version meetings averaged a paltry 12 minutes in length (see the rough arithmetic sketched after this list). It’s hard to imagine many impacts coming from a mentoring relationship with an average of 12 minutes per conversation (although some interventions in the human services realm can be successful with a frequent-but-brief check-in format). In fact, mentors in the Mentoring version spent almost as much time meeting with their program consultants to discuss the intervention (262 minutes on average) as they did meeting with the recipients of the services (305 minutes on average)!
  • Additionally, one wonders if meeting with teachers who were already familiar to the mentees in the Mentoring version may have kept youth from feeling like this program was an exciting new thing. Mentees may have taken the program more seriously if they were matched with this new person from outside the school (as in the AS version) instead of having these short meetings with a teacher they already knew.
  • The Mentoring version also eliminated the group time that offered peer engagement and fun activities in the AS version. One can imagine that these peer group interactions could reinforce the messages and strategies of the intervention while also allowing for some fun and deeper engagement with staff.
  • And not surprisingly, the AS version also offered more chances for parents to learn about the program and reinforce the strategies of the program at home.

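To make that dosage gap concrete, here is a minimal back-of-envelope sketch using only the session counts and durations reported above. It assumes each AS session ran its full two hours (the report gives session counts and lengths, not exact totals); under that assumption, the AS version delivered roughly thirteen times more direct contact time.

```python
# Back-of-envelope dosage comparison using the figures reported above.
# Assumption: each After School (AS) session ran the full 2 hours.
# Note that 25 sessions x 12 minutes = 300 minutes, which lines up with
# the reported 305-minute average of mentor-mentee time in the
# Mentoring version.

as_sessions, as_minutes_each = 32, 120  # AS version: ~32 sessions, 2 hours each
m_sessions, m_minutes_each = 25, 12     # Mentoring version: ~25 sessions, ~12 minutes each

as_total = as_sessions * as_minutes_each  # 3,840 minutes over the year
m_total = m_sessions * m_minutes_each     # 300 minutes over the year

print(f"AS version:        ~{as_total:,} minutes of direct contact")
print(f"Mentoring version: ~{m_total:,} minutes of direct contact")
print(f"Difference:        roughly {as_total / m_total:.0f}x more contact in the AS version")
```

However rough, the order-of-magnitude gap helps explain why the developers themselves expected a reduced impact from the Mentoring format.
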
None of this is to say that CHP shouldn’t have tried the Mentoring version of the program. But the reality is that the AS version was rated as having 85% fidelity to the intervention (meaning youth got 85% of what they were supposed to), compared with 80% for the Mentoring version. So even though the AS version was seen as more onerous for youth to participate in, those students actually got more total intervention, delivered more completely, in spite of those challenges. What this means for practitioners is that they should consider trying different versions of an intervention and testing them to see what the optimal delivery might look like, as was done here. The study authors ultimately conclude that the best version of this model is something that builds in the full curriculum and dosage, but does so during the school day to alleviate the scheduling challenges for families. So in the battle of ideal vs. realistic, the best path forward is trial and error, and tweaking until a program gets it right.

3) A “training” intervention might be particularly appropriate for ADHD youth

The authors of the study of CHP really emphasize that this is an intervention focused on training ADHD youth in new strategies, techniques, and skills that help offset the impact of their condition on learning. These training interventions differ from behavior management interventions, which often try to achieve the same goals by incentivizing certain behaviors rather than approaching them as a new skill to learn and develop. But, the authors note, “our results suggest that making long-term changes in adolescents’ behavior using training interventions may take considerable time and coached practice”⎯something that the AS version of the program barely provided and the Mentoring version didn’t offer at all, really. In fact, one wonders what a multi-year version of CHP might be able to achieve, especially if it could offer a special “mentoring” relationship with a teacher in the building over multiple years to keep reinforcing the lessons of the curriculum and offering ongoing coaching over time. So if your program wants to improve outcomes for ADHD youth, plan on it being something that takes considerable time and effort. Which is what those youth may need and certainly deserve from the caring adults around them.


For more information on research-informed program practices and tools for implementation, be sure to consult the Elements of Effective Practice for Mentoring™ and the "Resources for Mentoring Programs" section of the National Mentoring Resource Center site.

Evidence Rating: No Effects - One study 

Date: This profile was posted on November 07, 2016


Program Summary

This is a school-based intervention designed to help students with attention deficit hyperactivity disorder (ADHD) develop, practice, and generalize academic and social skills by using volunteer mentors to deliver skills training to students. This program is rated as No Effects. Academic functioning and parent/teacher ratings of student behavior reflecting ADHD symptoms did not differ significantly for youths in the intervention group, compared with the control group.

You can read the full review on CrimeSolutions.gov.

Wednesday, 29 November 2017 10:51

Check & Connect

Evidence Rating: No Effects - One study

Date: This profile was posted on November 08, 2017


Program Summary

This is a school-based, structured mentoring program designed to reduce school absences and promote student engagement. This program is rated No Effects. One study found that students in the program had significantly fewer days absent and more days in school. However, program students also had significantly lower math scores. There were no other statistically significant differences in outcomes. A second study also found no statistically significant differences.

You can read the full review on CrimeSolutions.gov.

Wednesday, 29 November 2017 10:47

Check & Connect

*Note: The National Mentoring Resource Center makes these “Insights for Mentoring Practitioners” available for each program or practice reviewed by the National Mentoring Resource Center Research Board. Their purpose is to give mentoring professionals additional information and understanding that can help them apply reviews to their own programs. You can read the full review on the CrimeSolutions.gov website.


In considering the key takeaways from the research on this program that other mentoring programs can apply to their work, it’s useful to reflect on the features and practices that might have influenced its rating as “No effects” (that is, a program that has strong evidence that it did not achieve justice-related goals).

1. Flexibility in implementation can be a blessing and, perhaps, a curse.

One of the real conundrums mentoring programs face is how they can both build on and implement research-based “effective” practices and program models while also allowing for enough flexibility to customize an intervention or specific practice for local context or needs. It can be a challenge to take something that worked in one place and apply it to a new population of youth, a new city or school, or to vary up aspects of the work to match the availability of local resources. There is a whole body of research devoted to these questions of implementation science: What components of a program are critical to keep as is and which can be tweaked or even discarded? Did something that worked well for one group of kids work for a different one? Are there contextual factors that doom some efforts before they start? When is a model no longer a model?

Check & Connect is a program that has been thoughtfully developed and that has garnered considerable interest from education institutions and nonprofits around the country looking for a solution to issues of school truancy and disengagement. Both of the evaluation reports discussed in the Crime Solutions review mention the many previous implementations of Check & Connect around the country, reviewing both the findings of those prior efforts and the role those findings played in the decision to implement that specific model in these new settings.

But what is striking about the two implementations of Check & Connect in this review is how different they are, despite being essentially the same program model. The study by Guryan and colleagues (Guryan et al., 2017) focuses on a test of the program serving youth in grades K-8. The average student in their cohorts was around eight and a half years old, likely a 3rd grader. The other study (Heppen et al., 2017) tested Check & Connect with students starting in grade 10, meaning that they received support from their mentor in their sophomore and junior years of high school. Paradoxically, the authors of the Heppen study conclude that the Check & Connect model probably works best for younger students who are not already so credit-deficient by 10th grade, while the Guryan study was pretty clear that the strongest outcomes were for the older students who participated in 7th and 8th grade; it was far less impactful for younger elementary students. This does not mean that these two studies together have inadvertently found that the sweet spot for this program model is middle school students, but it does highlight just how much the results of one program model can differ across the ages of youth served.

The differences in implementation across the two studies go deeper than just the ages of the youth. In the Guryan study, parent engagement is described as one of the four cornerstones of the Check & Connect intervention and, indeed, mentors contacted parents or guardians an average of twice a month. In the Heppen study, the role of parents is scarcely mentioned and their engagement or contact with mentors isn’t reported⎯their role seems limited to being involved in more intensive interventions “when necessary” for some students (p. 4). The mentors in Guryan were employees of a community-based nonprofit hired to work in the schools, whereas in Heppen they appear to have been district employees tasked with serving students in this role. In one study, mentors had a caseload of 30 students; in the other, they typically had 50-60⎯double the number of students.

And although both evaluation reports note that referrals to other services (e.g., mental health providers, dedicated tutoring, or broader wraparound services for the family) are absolutely critical for the intervention to succeed, neither report notes the volume or nature of those referrals or their impact on the outcomes studied. The authors do note that earlier implementations of Check & Connect focused on students with physical or learning disabilities, so perhaps those referrals were more important in those contexts. But still, there is little in these reports about how often mentors went beyond their own direct support in offering help to youth and families.

Both reports note that, in each instance, Check & Connect was implemented with fidelity, as intended. And both programs had mentors go through the recommended training and use the manuals offered by the developers. But in reading these two evaluation reports, one can’t help but wonder if these two programs were so dissimilar as to leave the question of what this model truly looks like at peak implementation, or who it serves best, up for some debate. This may be a case where the general flexibility of the intervention encourages its application in contexts where it will be more challenging, or where another approach might be more effective. Sometimes rigidity of implementation may be the practitioner’s best friend.

2. How much time together is needed for a relationship to be meaningful and “close”?

One of the most intriguing aspects of the Check & Connect model is that it lasts for two full years of school for participating students. That’s a long time in the context of most school-based mentoring programs, and it also crosses multiple school years, something that can be a challenge for in-school mentoring programs but that research suggests might be critical for sustaining impacts. Twenty-four months of a mentor at school checking in with you and helping problem-solve issues at school and home sure sounds like it would allow for some real bonding and trust-building between mentor and mentee, and plenty of depth to their interactions. In fact, the descriptions of the role of mentors in Check & Connect in the studies discussed in this review describe the meaningful social capital that these mentors provide and the critical role that meaningful relationships play in helping students overcome barriers to success in a socialized context like a school.

But even at two years per relationship, there are questions about how much mentoring happened in these implementations of Check & Connect. This has been a topic of debate in the mentoring universe for some time, and the studies presented in this review do little to quell that conversation. In the Guryan study, mentors met with students five times a month on average, although some of that was group meetings with other students the mentor was working with. The authors also noted that the level of engagement varied quite a bit from mentor to mentor. Each contact with students was described as brief and intended to provide a “nudge.” In Heppen, mentors met with their students 37 minutes a month in Year 1, 50 minutes a month in Year 2, and 61 minutes a month in Year 3, with around 20 minutes a month over the summers across the two years. Given what most mentoring programs offer, that is very light interpersonal contact⎯MENTOR’s 2016 National Mentoring Program Survey found that approximately 77% of the nation’s programs offer youth at least 90 minutes of mentoring a month (Garringer, McQuillin, & McDaniel, 2017). These mentors simply weren’t spending very much time with their mentees. And they were split for time across 50 to 60 students! That sounds like a situation where every student was getting that “check,” but one wonders if these relationships brought the social capital, personal touch, and authenticity of interaction that is assumed in the model itself. One wonders if these relationships were… relationships. Other research has hinted that even in-school mentoring relationships need some opportunities for fun and playful one-on-one interactions that support bonding and closeness⎯this has also proven true in other out-of-school mentoring interventions for youth who are struggling with challenging issues. It doesn’t seem clear, from the two implementations of the model provided here, that Check & Connect is allowing for that. This is an issue that has been noted for other narrowly focused school-based mentoring interventions, but there is also evidence that it is fixable when programs place a bit more emphasis on relational bonding. It’s not clear how that might happen with a caseload of 50+ high school juniors.

3. When designing an intervention for youth with attendance issues, plan for their inherent mobility.

One of the frustrating aspects of these two evaluations of Check & Connect is that the mobility of the students themselves influenced the delivery of services and the ability to achieve the stated goals of the programs. The Heppen study, in particular, suffered from students moving within and out of the district. Those students had much worse outcomes than students who stayed either at the same school or in the district. While there are many practical reasons why mentors would have a very hard time continuing to follow up with students who moved out of the district, it is also true that the developers of these programs had ample warning that this would be an issue. If the goal was to continue to offer the intervention as intended to these students wherever they enrolled, one wishes that they had a better contingency plan⎯dedicated staffing for students who moved out, increased contact with parents when students moved, some ability to travel in person to neighboring districts, e-mentoring platforms where the relationships could continue⎯to serve these students more effectively. The authors in Heppen conclude that

“for highly mobile students, large caseloads may prevent mentors from being able to track down and spend an adequate amount of time working with all of their students… Mentors in this study attempted to connect with students who left district schools, but they found this process difficult and time consuming and were concerned that it was taking time away from other students on their caseload. In response, the program developers and implementation team decided during the study to prioritize delivery to the nontransfer students.”

In other words, they seem not to have prepared adequately for the mobility of a group of students that by definition tends to have high mobility and, as a result, shifted gears on these students mid-study. Practitioners designing services intended to send students on a path towards course completion and graduation like this may want to prepare for that more in the future and draw clearer lines about how those students will and won’t be served.

4. Think about how your program can access outcome data at multiple layers and long after services have ended.

The Heppen evaluation offers a nice example of how a school-affiliated program that wants to increase school completion and graduation can track those outcomes even if they are set to occur many years after youth have participated in the program. The evaluators made sure to secure access to school district records to see if youth who participated as freshmen, sophomores, and juniors eventually graduated high school, or wound up getting a diploma even years later if they happened to leave school prematurely. But as noted above, their study had quite a bit of attrition as youth moved across schools and even out of the district altogether.

To address those circumstances, they went up a layer in the education data system, accessing student records at the state level. Many states have become particularly skilled at collecting and sharing student data for research purposes through statewide longitudinal data systems. Programs and evaluators are encouraged to learn what these systems in their areas can provide and use that information to help determine the true impact of programs over time, even when students move away (but are still within the state).


For more information on research-informed program practices and tools for implementation, be sure to consult the Elements of Effective Practice for Mentoring™ and the "Resources for Mentoring Programs" section of the National Mentoring Resource Center site. 

Wednesday, 02 May 2018 11:22

Check & Connect Plus Truancy Board (C&C+TB)

*Note: The National Mentoring Resource Center makes these “Insights for Mentoring Practitioners” available for each program or practice reviewed by the National Mentoring Resource Center Research Board. Their purpose is to give mentoring professionals additional information and understanding that can help them apply reviews to their own programs. You can read the full review on the CrimeSolutions.gov website.


In considering the key takeaways from the research on this program that other mentoring programs can apply to their work, it’s useful to reflect on the features and practices that might have influenced its rating as “Promising” (that is, a program that shows some evidence that it achieves justice-related goals when implemented with fidelity).

Of all the reviews of mentoring programs done for Crime Solutions, this variation on Check & Connect is one of only two programs to receive different effectiveness ratings across variations of the same model (the Brief Instrumental School-Based Mentoring Program, which has undergone several improvement cycles over the years, being the other). Clearly there are things about this version of the Check & Connect model that allowed it to be more successful around the core program goal of improving graduation rates for truant students. So what was so different about this version of the program?

1. The partnership between the school and truancy courts seems powerful.

The biggest difference in this version of the Check & Connect model is the direct partnership with the community truancy board, a group of community leaders, school officials, and representatives of the juvenile courts. Other implementations of the Check & Connect model have relied solely on school personnel to develop and lead the program, whereas this variation adds the full weight of the juvenile court system to the effort.

Mentees in this version of the program begin their involvement by receiving a summons from the court regarding their truancy, asking them to appear before the truancy board with their parents or guardians. The associated journal article describes this meeting as follows: “5–10 board members sit behind a horseshoe-shaped table and the student and family are seated facing them…they seek to convey a helpful and collaborative attitude in seeking out with the family the obstacles that make school attendance challenging. Following a discussion of the state’s truancy laws and the legal consequences of continued truant behavior, these issues are identified and discussed over the course of a 10- to 20-minute question-and-answer period.”

This is clearly a more intense beginning to program involvement than in a typical Check & Connect program, which often starts with a meeting with the very same school personnel who have already been unable to support the youth’s consistent attendance. In this version, youth and, perhaps crucially, their parents are approached by an external group with some legal authority over the circumstances. This is likely to be a more jarring experience, especially for parents, and this aspect of the program alone might spur a more rigorous attempt by parents to address barriers at home that may be influencing their child’s attendance. These meetings also result in a contract between the family and the board that details specific steps they must take to improve attendance and comply with what is, essentially, a court order.

Once the student has completed this appearance, they begin meeting with the school mentor to get the more traditional “check” and “connections” of the model. But in this instance, the mentor is not a school employee or a community volunteer. They are an employee of the court, once again representing that system in partnership with the school. Placing this type of juvenile court role literally in the school may send a powerful message to youth and families. It also, as the authors note, builds on research that has shown the value of more holistic interventions around truancy, and it illustrates some of the power gained by forging this school–juvenile court partnership.

The services offered by this mentor closely resemble other versions of Check & Connect: tracking of school data and progress toward graduation; monthly meetings for low-risk students and bi-weekly meetings for higher-risk students; an emphasis on the relationship and rapport; and referrals to other services as needed. But those interactions just might carry a bit more heft with a representative of the courts in that role. Unfortunately, this study did not test a variation that included the truancy board but used school staff or community members as mentors, so it is unclear whether it is the board or the in-school board representative (or a combination of both) that is driving the results here. Whatever the mechanism, adding the truancy board as a partner to this model clearly paid off.

2. But there are risks in that approach…

This study compared truant youth at one high school in Spokane, WA, with truant youth at three other area high schools using a matched comparison design (see the sketch below). The one school received this embedded court representative in the “mentor” role while the others did “business as usual.” It is highly likely that having that court representative embedded in the school as part of this program had an impact on school culture at some level. They had an office in the building, met frequently with students there and, while the article doesn’t directly address this, likely met with teachers, administrators, and other school staff in the course of their work. Adding someone in this role would likely have a ripple effect beyond just their own actions, perhaps spurring more collaboration between teachers and families, influencing other aspects of the school such as the work of the counselor, or fostering more check-ins with students by other adults.
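
For readers less familiar with the methodology, a matched comparison design pairs each program student with a similar non-program student, so that outcome differences are less likely to reflect pre-existing differences between the groups. Below is a minimal sketch of one common approach (nearest-neighbor matching on a few covariates); the column names and covariates are hypothetical, and the study’s actual matching procedure is not detailed in the article.

```python
import pandas as pd
from sklearn.neighbors import NearestNeighbors

# Hypothetical dataset of truant students from all four schools.
#   columns: student_id, treated (1 = program school, 0 = comparison),
#            prior_attendance, credits_earned, grade_level
students = pd.read_csv("truant_students.csv")

covariates = ["prior_attendance", "credits_earned", "grade_level"]
treated = students[students["treated"] == 1]
pool = students[students["treated"] == 0]

# For each program student, find the most similar comparison student
# on the chosen covariates (1-nearest-neighbor matching). In practice,
# covariates would usually be standardized first so that no single
# variable dominates the distance calculation.
nn = NearestNeighbors(n_neighbors=1).fit(pool[covariates])
_, idx = nn.kneighbors(treated[covariates])
matched = pool.iloc[idx.ravel()]

# Graduation (or any other outcome) can then be compared between the
# `treated` group and its `matched` comparison group.
```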

But there is also a huge risk in this model, because this one person was the mentor for every single youth in the program. We know that not every mentoring relationship will “click” and that sometimes rematching can help a student find someone who is a better fit (the work of Amanda Bayer and colleagues1 in this area has highlighted the importance of compatibility in school-based matches and how it may influence both relationship closeness and program outcomes). This version of the program places the full weight of all of the program’s mentoring work on the shoulders of one person. Inevitably, that person will not connect well with every student they must meet with, and it’s unlikely they can build the close, mutually rewarding relationship we commonly associate with mentoring outcomes across such a wide variety of students. This role would require a truly exceptional individual who could offer their mentoring in a variety of flexible styles and adapt their approach to the needs and personalities of each mentee.

While that clearly worked well enough in this instance to generate positive results, it raises questions about the scalability of this approach, and concerns about instances where an embedded, court-provided employee happens not to be a great “fit” for a school or the students with whom they work. While the emphasis on the “check” portion of the intervention might still be helpful in those situations, the absence of what might be considered truly authentic mentoring could make positive outcomes hard to come by. So practitioners intrigued by this approach may not want to risk putting all their eggs in the basket of one individual mentor.

3. Replicating something similar in other program settings.

Clearly there is power in this type of school-family-juvenile court partnership. Not every school is well positioned to make this type of partnership happen. But they might be able to recreate some of the magic here by:

  • Developing a truancy “commission” or some other entity made up of community leaders, law enforcement, youth development programs, and school staff that could mimic the role the truancy board played here. In thinking about the many efforts happening around the country to use mentoring to combat chronic absenteeism, starting that relationship with a formal meeting between the youth, parents, and a group of caring individuals who collectively bring some authority to the proceedings might jumpstart this work in a way that simply providing a mentor would not. So even if a school or program can’t forge a true integration with the juvenile courts as was done here, it might still be able to create some kind of formal kick-off that carries similar weight.

  • Embedding an “external” mentor (or more) in the school. As noted above, making one person responsible for all the mentoring may not be the best approach, but there is potentially power in having the people doing the “checking and connecting” be external to the school staff. They can bring different resources to the work, look at issues from an outsider perspective, and advocate for the youth in ways that might be precluded for school personnel. And, as noted earlier, bringing external partners into this work by giving them physical space and a role in the building may also have an influence on school culture and the behavior of school staff.

4. A lack of information about mechanisms of change.

Frequently in these “insights” pieces, we lament aspects of study design or reporting that make it challenging for practitioners to learn more about how to replicate results. This program’s evaluation is no exception, as several critical pieces of information are underdiscussed or absent from the article we have to work with:

  • Although the program did a comprehensive needs assessment using a standardized instrument when youth entered the program, there did not seem to be an attempt to see whether students improved on any of these measures (or at least on those the program might plausibly have changed in a pre-post sense). So while youth were assessed on aggression-defiance, depression-anxiety, substance abuse, peer deviance, family environment, and school engagement, we don’t know if the program’s mentoring changed any of these things for the better. In fact, there is very little in the report about the mechanisms of change that would lead to improved attendance and, eventually, graduation.

  • We also lack information about the delivery of the program. The article contains little information about the frequency and duration of check-ins beyond the stated program ideal. It also offers little information about the steps parents took in response to those board-provided action plans, the number and types of referrals to other services, or the types of barriers the mentor worked to overcome. So while we know that the program seemed to work in terms of improving graduation, we have to “read the tea leaves” a bit to figure out exactly what was provided to youth, how that might have influenced their attitudes and behaviors, and which program components drove outcomes.

In fact, this evaluation is one of the more cautious program investigations we’ve examined through the work of the NMRC. Rather than looking at a variety of youth outcomes or digging deep into what drove program changes, this evaluation examined a single outcome: graduation rates. In some ways, this is a safer approach if the goal is to boil a program’s effectiveness down to its core outcome. There are many programs in these Crime Solutions reviews that had positive impacts on some outcomes but were rated as “no effects” because the bulk of their outcome areas showed little difference between treatment and control. They were, in essence, perhaps done in by their own attempt to look at many outcomes rather than hanging their hat on one.

But the evaluation here is so focused as to arguably be of limited use to practitioners beyond what we have covered here. Clearly there appears to be power in using truancy boards and court-appointed staff embedded in schools to do this kind of Check & Connect style program. But it would be nice if we understood a bit more from the study about why it appears to have worked, how it worked, and ways that it could work even better. Perhaps future iterations of this successful model will illuminate these issues and allow this work to grow to scale.


1  Bayer, A., Grossman, J. B., & DuBois, D. L. (2015). Using volunteer mentors to improve the academic outcomes of underserved students: The role of relationships. Journal of Community Psychology, 43(4), 408–429.


For more information on research-informed program practices and tools for implementation, be sure to consult the Elements of Effective Practice for Mentoring™ and the "Resources for Mentoring Programs" section of the National Mentoring Resource Center site. 

Wednesday, 02 May 2018 11:38

Check & Connect Plus Truancy Board (C&C+TB)

Evidence Rating: Promising - One study 

Date: This profile was posted on May 01, 2018


Program Summary

This is a school-based program that integrates a case-management framework for providing social support to truant youth. The goals of the program are to improve school attendance and renew progress toward graduation. This program is rated Promising. Students in the intervention group were more likely to have graduated and less likely to have dropped out than students in the comparison group.

You can read the full review on CrimeSolutions.gov.
