Brief Instrumental School-Based Mentoring Program – Revised

*Note: The National Mentoring Resource Center makes these “Insights for Mentoring Practitioners” available for each program or practice reviewed by the National Mentoring Resource Center Research Board. Their purpose is to give mentoring professionals additional information and understanding that can help them apply reviews to their own programs. You can read this program's full review on the CrimeSolutions.gov website.*


In considering the key takeaways from the research on this program that other mentoring programs can apply to their work, it’s useful to reflect on the features and practices that might have influenced its rating as “Promising” (that is, a program that has shown some evidence that it achieves justice-related and other goals when implemented with fidelity).

Update: One of the nice features of the Crime Solutions rating process and listings of reviewed programs is that these reviews are subject to change if a program undergoes additional evaluations over time. Subsequent evaluations will often confirm that a program model is effective, or perhaps even indicate that it can be applied more broadly or with even greater impact under certain conditions. Other times, subsequent evaluations can yield contradictory results, throwing the efficacy demonstrated in prior studies into doubt. The revised version of the Brief Instrumental School-Based Mentoring Program, which had risen to the level of “Promising” based on studies from several years ago, remains at that designation, although the newest study had decidedly mixed findings.

It is worth noting that the most recent evaluation of the program did find positive results for youth participants: significant positive estimated impacts on school behavioral infractions, math grades, and students’ reports of emotional symptoms and school problems relative to their peers. But this program has shown both positive and null effects on a number of outcomes and across multiple studies. What is a practitioner to make of a program that is a bit of a “mixed bag” when it comes to effectiveness? We unpack a few of the reasons for these mixed results here and emphasize things that practitioners may want to keep in mind when designing and evaluating their programs. The original Insights for Practitioners from 2017 is also included below, as it still offers plenty of good food for thought that mentoring professionals will want to consider.

1. Beware the perils of looking for too many outcomes, too often.

One of the great things about this program model is that the developer, Dr. Sam McQuillin of the University of South Carolina, is constantly tinkering with the program, making small but meaningful tweaks over time in an effort to improve it. In fact, you’ll notice below in the original version of this Insights that we praised the developer for continually tweaking and testing the program in a continuous improvement mindset. It was this process that got the revised version of this program to a “Promising” rating after the original version was rated as “No Effects.”

But there is risk every time you tweak a program’s delivery or evaluate its outcomes. There is also risk in looking for a wide variety of outcomes that, while relevant to the work of the program, may not be of critical importance to determining whether the program is achieving its mission. Both of these factors played a role in the mixed results of the new study. This latest evaluation was an attempt to try some new aspects of the training and to serve the youth with the most need in the participating schools. McQuillin and McDaniel (2020) noted that this iteration of the program sought to serve the “middle school-aged adolescents who displayed the highest levels of school-reported disciplinary infractions,” while also studying the effectiveness and ease of delivery of the motivational interviewing-based activity curriculum. Both the population-specific effectiveness and the ease of “delivery” of the intervention are good things to test out, and any opportunity to learn more about the program is likely beneficial. But positive outcomes are never guaranteed, even for a program that has produced them in the past. And every fresh look is a new chance to “fail.” Or at least not get an “A.”

This latest evaluation of the program did find a number of positive outcomes for participating youth, as noted above. But it failed to find them in a number of other categories: grades other than math and teacher-reported outcomes such as behavioral problems and academic competence. The other recently conducted evaluation that informed this updated review also failed to find significant positive effects for youth in the program compared to their unmentored peers. The Crime Solutions rating system sets a high bar for programs, and with good reason: we must ensure that public investments in prevention and intervention efforts actually do reduce crime, delinquency, and related factors. But, assuming the goal is to have one’s program found to be effective, it is also a rating system that disincentivizes further research into a program model, or innovations that might improve a model that has already proven effective, because every evaluation undertaken and every outcome examined winds up being another opportunity to get an unexpected result, one that can lead to a less desirable classification of the overall evidence and contribute to a public perception of the program being ineffective (or at least less effective than previously suggested). There are many programs in Crime Solutions that achieved nothing of value and should be noted as such, but lumping programs that have demonstrated some positive impact in with those others seems potentially misleading at best and inaccurate at worst. Thankfully, this program avoided seeing its rating downgraded. Still, this downside of looking at too many outcomes at once, or of continually engaging in high-quality evaluations, should be noted by programs and evaluators alike, as it can lead to some negative public consequences.

2. College students might be a good fit as mentors if your program wants to integrate a complicated new curriculum or emphasize fidelity.

One of the interesting aspects of the implementation of this program is the use of college students as the mentors. This program utilizes a curriculum and set of activities based on motivational interviewing (MI), a strategy for changing how people think and behave without, in essence, ever directly asking or “prescribing” them to do so. MI is notoriously challenging to master, even for experienced therapists and social workers. The mentors in this instance were college students, many of whom were psychology or related majors who might have an inherent interest in learning about MI. One of the main goals of this study was to see if the mentors liked and could use the materials and MI strategies effectively. In the McQuillin and McDaniel (2020) study, these college students reported that, for the most part, they found the material and techniques of the program to be acceptable, appropriate, and understandable. But one wonders if mentors who come from all walks of life and who might not be as interested in a concept like MI would offer similar ratings.

Thankfully, Dr. McQuillin is partnering with MENTOR, a national organization devoted to supporting mentoring programs with training and technical assistance, to pilot test materials based on this program in a wide variety of programmatic settings to see if mentors generally find the materials to be helpful and usable, or if there are subgroups of mentors, such as college students, who might be more effective in their use. The hope is that other mentors can use MI as effectively as college students have in the several trials of this program.

3. Programs should be encouraged to try out new strategies and mentoring techniques!

In spite of some of the challenges around properly quantifying the impact that this program has demonstrated on participating youth, the reality is that many mentoring programs are trying to find ways to make their mentors more effective and are turning to strategies like MI to try to boost their results. The most recent study of this program offers several great ideas to keep in mind. First is the importance of testing feasibility, understanding, comfort, and use when implementing a new approach or concept, as this program did. The evaluation report features a great chart showing all of the different things they asked mentors about: Were the materials easy to understand? Were you enthusiastic about using them? Did they seem to fit the mission of the program? Did they fit your style as a mentor? Did you have challenges trying this out in real life? Did you need more training and support? By examining all those aspects, the evaluation was able to determine that, at least in this program setting, perception of the training and use of the materials was positive. No program can do great work if the mentors haven’t bought into the process and aren’t happy with the tools.

But there were critiques from the mentors. Some felt that the program needed parent “engagement,” if not outright direct involvement, for the MI work to be successful. Perhaps more importantly, the mentors wanted more training and practice on the use of the MI strategies. They understood the materials. They “got” the concepts and were willing to use them when meeting with mentees. But even these really bright college students still felt a need for more practice and training, something that isn’t surprising given the aforementioned challenges of MI use.

I am sure the developers of this program are working on other studies that will tweak some things and test them to see if they can solve the “more training” issues. That’s what a continuous improvement approach looks like. Here’s hoping they don’t get inadvertently punished by policymakers once those studies are done for having the guts to keep looking and learning.

Original Insights for Practitioners posted in 2017:

In considering the key takeaways from the research on this program that other mentoring programs can apply to their work, it’s useful to reflect on the features and practices that might have influenced its rating as “Promising” (that is, a program that shows some evidence that it achieves justice-related and other goals when implemented with fidelity).

1. The value of testing program improvements over time.

One of the best pieces of advice for youth mentoring programs is that they should always be tinkering with their approach and testing the results to see if aspects of the program can be improved. This can be as simple as adding fresh content to mentor training, beefing up match support check-ins, or directing mentor recruitment toward specific types of individuals because they might be a better fit for the role than previous mentors. Any good program will constantly be looking for subtle ways to make what they do better. And in the case of the Brief Instrumental School-Based Mentoring Program, we have a great example of the type of payoff that can happen when a program takes this kind of iterative approach to the development of its mentoring services over time.

An earlier version of this program was reviewed for Crime Solutions and found to have “No Effects,” meaning that it did not produce meaningful outcomes in many of the main categories in which it was trying to support youth. But as noted in the “Insights for Practitioners” for that iteration of the program, this is a program that has been constantly evaluated and revised in a continuous improvement approach. In fact, the earliest version of the program showed evidence that it may have actually produced some harmful effects for youth. So the developers improved the training and content of the program and evaluated it again, finding that they had eliminated the harmful impacts but not quite reached their goal of meaningful positive outcomes.

So they revamped and tried again, once more emphasizing rigorous evaluation to see if their improvements worked. This version of the program added a host of improvements: the provision of a mentee manual, revised mentor training and supervision, more choice for mentors on match activities, and e-training and support. And these changes again paid off in a measurable way.

Figure 1

This figure, taken from the report of the evaluation conducted by McQuillin and Lyons, shows the notable improvements that are evident across the three versions of the program. In each of the outcome areas listed along the X axis, we can see that this latest version of the program outperformed the earlier versions, sometimes markedly, with several outcomes having effect sizes (levels of impact) that are consistently well beyond the average of around .20 found in the last comprehensive meta-analysis of mentoring program effectiveness. This figure is a powerful example of why we need to allow mentoring programs to have some missteps along the way so that they have the opportunity and time to try new ideas and improve weaker aspects of what they are trying to do. Unfortunately, many programs lose support and funding after an evaluation that shows poor or even less-than-desired results. Others may develop internal cultures that unfortunately do not support questioning current practices and thus miss out on opportunities to improve through innovation and evaluation. Policymakers often view programs as “working” or not, when the reality is that most programs need time to work out the kinks and a chance to find better ways of delivering their services. The story of this program really highlights why practitioners need to be constantly trying to improve what they do and why program stakeholders need to exhibit some patience as the program works toward its “best” design.

2. The more complicated the mentors’ task, the more supervision and support they need.

Among the many improvements that were made in this version of the Brief Instrumental School-Based Mentoring Program, perhaps none is more notable than the bolstering of the supervision and support offered to mentors. Although the evaluators of the program did not test to see if this enhanced supervision was directly responsible for the improvements in the program’s outcomes, it is quite plausible that this played a meaningful role.

For this version of the program, mentors were provided not only with considerable up-front training but also with a wealth of support and guidance in delivering the curriculum of the program and integrating it into the relationship that each mentor had with his or her mentee. Keep in mind that this program asks mentors to engage in a number of highly particular conversations and activities with their mentees that build in a sequential fashion and provide youth with very specific skills and ways of thinking about themselves and their academics. Mentors are asked to engage in motivational interviewing, apply cognitive dissonance theory, and teach academic enabling skills. Doing these things well requires adhering closely to a set of phrases, behaviors, and talking points, and the developers of this program didn’t want to simply do a training and then leave that to chance (as they had in the past).

Before each session, mentors would meet with a site supervisor who would review the curriculum for that particular session, reinforce keys to delivering the content well, and answer mentor questions. After each session, mentors again met with site supervisors to go over their checklists of actions to see if they had covered all of the content they were supposed to during the session. Mentors were also encouraged to engage in additional phone-based support with their supervisor as needed and were provided with a manual that they could refer to at any time to get more familiar with the content and practice the types of key messages they were supposed to be delivering to mentees.

The type of intensive “just-in-time” mentor training and supervision offered by this program is likely beyond what most school-based mentoring programs can offer their mentors with their current resources (it’s also worth noting that most programs are simply not asking mentors to engage in activities that are this tightly controlled and specific). Just about any program could, however, borrow the concept of pre- and post-match check-ins with a site coordinator as a way of boosting program quality. These meetings could be brief but critically important to reviewing what the match is focusing on, sharing information about what the mentee has been experiencing recently (at school or at home), and reinforcing key messages or talking points that have the potential to either help make the relationship stronger or offer more targeted instrumental support. As the evaluators of this program note, “One persistent problem in SBM intervention research is the confusion surrounding what occurs in mentoring interactions and relationships,” which results in not only challenges in improving the program but also in helping others to replicate proven mentor strategies in other programs. Using the kind of rigorous pre-post mentor support used by this program might allow programs to understand what mentors and mentees are doing when they meet and make sure that important aspects of the program are delivered with fidelity, even if those important aspects are not as complicated as those in this particular program.

3. Remember to borrow evidence-supported practices and ideas from other youth development services.

We have frequently encouraged mentoring practitioners in these “Insights” pieces to borrow ideas and tools from other youth serving interventions, both mentoring and beyond. The Brief Instrumental School-Based Mentoring Program-Revised offers a great example of this in its sourcing of the “academic enablers” used by mentors. Rather than inventing a new set of tools and techniques, the program simply borrowed and adapted the materials from an intervention for children with ADHD that used coaches to help improve academic skills. The materials had already been used with good results in a short-term intervention, meaning they would fit into the timeline of the mentoring program. Their previous successful use with students with ADHD also meant that they would likely be a good fit here for any mentees with learning disabilities. So rather than reinventing a suite of activities to teach mentees about agenda keeping, planning, and organization skills, the program simply adapted something that already existed and fit their school-based model. This type of adaptation can save staff time and resources, while increasing the odds that the program is doing something that will be effective.

4. Don’t forget that mentoring programs are relationship programs.

The Insights about the earlier version of this program noted that there may have been challenges in developing relationship closeness given the program’s brief duration (8 sessions) and the highly scripted nature of the interactions (lots of curriculum delivery, not a lot of fun): “One wonders what the results might have been if this program had emphasized the ‘brief’ and/or the ‘instrumental’ just a bit less or otherwise figured out a way to give the participants more time to build a stronger relationship rather than an ‘alliance’.”

Well, this iteration of the program did exactly that by emphasizing more play, games, and freedom in session activities (provided that the core intervention was completed). As explained by the researchers: “the SBM program described in this study includes a variety of activities designed to promote a close relationship between the mentor and mentee because the quality of the mentoring relationship is theorized to be critical for helping mentees achieve their goals. Within an SBM context, brief effective instrumental models of mentoring that also include activities to develop close relationships may be an ideal ‘launch-pad’ for SBM programs with a developmental model to extend the period of the brief mentoring beyond the brief of mentoring.”

This serves as another example of this program learning from a prior attempt and doing something better (in this case, strengthening the relationships themselves). It also provides some food for thought for any school-based mentoring program. Given that many school-based programs are brief and focused on fairly instrumental pursuits, how can these programs not only strengthen the relationships but also keep those relationships going once they get strong? It seems a shame to have a program forge new and meaningful relationships, only to use them in service of something that is, by design, short term. School-based programs should consider partnering with other programs or finding other ways of using a “brief instrumental” program as a testing ground for relationships that can transfer to another program or setting if they “take root.” The developers of this program note that the ideal goal of school-based mentoring might be to “increase the immediate impact of mentoring on student outcomes and promote long-term mentoring relationships desired by developmental models of mentoring.”

It is unclear how many of this program’s brief matches (if any) went on to have longer-term engagement outside of the program. But all school-based programs should be asking, “How can we keep these relationships going once they have tackled the targeted things we ask them to achieve?” Without exploring that, we may minimize the value of school-based mentoring and deny mentees and mentors the opportunity to grow something brief into something quite monumental.


For more information on research-informed program practices and tools for implementation, be sure to consult the Elements of Effective Practice for Mentoring™ and the “Resources” section of the National Mentoring Resource Center site.
