Displaying items by tag: Prevention of youth risk behavior

MAY 23, 2017
BY: JODIE MARTIN, MENTOR: THE NATIONAL MENTORING PARTNERSHIP

Over the past several weeks, many youth development professionals have become aware of the Netflix series 13 Reasons Why, which revolves around the suicide of a female high school student, Hannah Baker. Given the show's fast-spreading popularity, you may have received questions from parents and youth about the themes it addresses, whether or not you have seen it yourself. These themes, many of which are uncomfortable and controversial, are nonetheless important to talk about, especially with students close in age to the characters depicted on the show. Conversations about mental health and suicide can reduce the stigma surrounding these experiences and let those who are suffering know that they are not alone and can get help. However, because of the complexity of these topics and the ways in which they are depicted in the series, parents/guardians, school administrators, and youth development professionals should be aware of the questions and concerns young people may have after watching it, so they can be prepared for the important discussions the series may spark.

For those who are unfamiliar with the series' premise, the story unfolds through a set of pre-recorded tapes on which the main character, Hannah Baker, describes the thirteen reasons that led up to her suicide. Over the course of the episodes, the audience learns that many rumors were spread about Hannah and that she eventually became the target of bullying, social isolation, and sexual abuse. Below, we offer some information about the complex themes addressed in the show and where you can go for more resources. Because we know many parents and youth development professionals have been addressing these topics with youth, we invite you to post other resources and tips you have found useful in the comments section below.

Published in NMRC Blog
Friday, 19 August 2016 10:49

Across Ages

Evidence Rating: Promising - One study

Date: This profile was posted on May 28, 2013


Program Summary

A mentoring initiative designed to delay or reduce substance use among at-risk middle school youth through a comprehensive intergenerational approach. This program is rated Promising. The program significantly reduced school absences and had positive effects on measures of youths' reactions to situations involving drug use and their attitudes toward school, the future, and elders. However, the program did not affect youths' frequency of substance use or well-being.

You can read the full review on CrimeSolutions.gov.

Evidence Rating: Effective - More than one study

Date: This profile was posted on November 19, 2013


Program Summary

A strengths-based, advocacy-oriented program that diverts arrested youth from formal processing in the juvenile justice system and provides them with community-based services. This program is rated Effective. The program was associated with a significant reduction in rates of official delinquency among participating juveniles compared with juveniles formally processed in the system. However, the program did not significantly affect youths' self-reported delinquency.

You can read the full review on CrimeSolutions.gov.

Evidence Rating: Effective - One study

Date: This profile was posted on June 04, 2011


Program Summary

Offers one-to-one mentoring in a community setting for at-risk youth between the ages of 6 and 18. This program is rated Effective. It was associated with a significant reduction in the initiation of drug and alcohol use and in antisocial behavior among mentored youth. Mentored youth also had significantly better relationships with their parents and greater emotional support from peers. The program, however, did not have a significant effect on youths' academic performance (grades and absences) or self-worth.

You can read the full review on CrimeSolutions.gov.


Thursday, 22 January 2015 08:47

Brief Instrumental School-Based Mentoring Program

*Note: The National Mentoring Resource Center makes these "Insights for Mentoring Practitioners" available for each program or practice reviewed by the National Mentoring Resource Center Research Board. Their purpose is to give mentoring professionals additional information and understanding that can help them apply reviews to their own programs. You can read the new full review on the Crime Solutions website. That review covers an updated version of the program, based on a fresh evaluation. You can read the previous version's insights here.


There are several notable features of the Brief Instrumental School-Based Mentoring Program that others providing mentoring in a school setting can learn from and think about building on. One of the most interesting conceptual ideas behind this program is that it builds on the theory, developed by researchers such as Tim Cavell, that a mentoring relationship can be viewed as a "context" in which other meaningful work occurs, rather than the relationship being the sum and the end of what is provided to a child. In other words, perhaps what matters is not so much the close, long-term bond between mentor and mentee as the transformative power of the activities they engage in and the skills that get developed within the context of the relationship. This program very intentionally built on this idea, placing more emphasis on the sequential set of activities that matches did over time than on the fostering of a deep and long-lasting relationship.

This approach may have special appeal for schools or other institutional settings where building a meaningful long-term relationship can be difficult. The chaotic nature of the school environment, the frequent breaks in the school calendar, the relatively short duration of a school year, and youths' moves between schools all work against establishing the type of intense and lasting relationship we typically associate with mentoring. So the developers of this program decided to try something different in a school setting: They intentionally shortened the time that mentors and mentees would spend together and put a lot of thought and effort into developing a sequential set of activities that would, in theory, provide the youth with meaningful support around their academic achievement, school behavior, and connectedness to school and teachers.

This more instrumental form of mentoring, where purposeful activities drive the interactions between mentor and mentee, holds promise for programs set in schools (for example, a similarly-instrumental program reviewed by the NMRC Research Board was rated as “promising”). There were several program features that brought this more instrumental approach to mentoring to life:

  • The concept of academic enablers. One of the core ideas promoted by the program’s activities was the concept of academic enablers, things like good study habits and organizational skills that research suggests are pretty malleable and that can really boost a young person’s academic performance. This seems to be a good fit for the middle school population served in this program, since middle school is a time when more gets asked of students and those with poor study or organizational skills may begin to struggle with coursework and tests for the first time. The developers of this program built the activities on a framework from other research involving academic coaching, testing the idea that mentors could provide much of the same support.

  • Counseling skills. To facilitate the academic-focused work, and building on the “coaching” concepts that informed program activities, mentors in the program were trained in several strategies that have proven to work in other settings, specifically counseling strategies such as motivational interviewing. This client-centered conversational style helps clients (in this case students) set and achieve goals, primarily through facilitated conversations that follow some basic principles: “Ask, Don’t Tell”, “Avoid Argumentation”, and “Support Progress Toward Goals.” These approaches help the relationship avoid conflict and the feeling that the mentor is being “prescriptive” toward the youth, while making sure that the mentor’s words and wisdom are applied toward realistic goals that the youth has set for himself or herself. Along with the training mentors received in these strategies, they were monitored to ensure that they followed these principles in their meetings with their mentees.

  • An emphasis on Connectedness and Life Satisfaction. The program also tried to boost the feelings of connectedness to schools and teachers, as well as the student’s overall life satisfaction. Research indicates that both of these influence academic performance and engagement with school, so several program activities were designed to support growth in these areas, mostly by helping youth identify and overcome challenges and build skills that would help them experience academic success they could build on.

Given that this program was built on these concepts and principles that had proven to work in other settings, why did it not produce stronger results? The mechanisms for change seem very well conceived, yet the program produced only a limited number of positive effects (improved math grades, reduced disciplinary referrals, small increases in life satisfaction) while failing to “move the needle” on most of the outcomes the program was designed to address.

It turns out that perhaps the closeness and depth of that mentoring relationship matters more than we might think:

  • The mentoring relationship lasted all of eight sessions. The idea behind these relationships was that they would be more of a "working alliance" than an intimate, sustained relationship. And there is no doubt that these matches did bond some and got along well enough to engage in meaningful activities together. But only the first two of the eight sessions focused on relationship building and getting to know each other. After that, the remaining six weeks were focused either on goal setting and building academic enablers or on problem solving and giving the youth feedback on their progress. While eight sessions with a mentor can surely be meaningful, one has to wonder just how close these youth felt to their mentors, how much weight they put on their words, and how deeply their messages sunk in. Ostensibly, there is a world of difference between having an adult you work on things with and having an adult whose words and wisdom carry power, whose opinions deeply matter. Unfortunately, the researchers behind this program did not measure relationship closeness. It would have been interesting to see how the participating youth viewed their mentors. One wonders what the results might have been if this program had emphasized the "brief" and/or the "instrumental" just a bit less or otherwise figured out a way to give the participants more time to build a stronger relationship rather than an "alliance."

  • The mentoring meetings all happened in a designated mentoring room. Although not all matches met at the same time (they met during youths' free periods), it seems likely that matches were at times sharing this space with other mentors and mentees. The study doesn't shed much light on what the environment of the room was like, but one can imagine it being fairly chaotic and loud, without much privacy or room to really bond or discuss a sensitive topic. School-based programs can really struggle with the question of where matches meet on campus, but it can be especially challenging if multiple matches are meeting in the same space. This could have been another barrier to relationship quality and the subsequent outcomes of the program.

So this winds up being a program with a very promising design, but one that fell short of some expectations in terms of outcomes. It based its design on the notion that the mentoring activities would matter more than the depth of the mentoring relationship. Without a measure of relationship quality, we don't know whether that assumption was proven wrong. Yet, as discussed above, there is reason to suspect that the activities could have benefited from a stronger underpinning of a meaningful relationship between mentor and mentee.

The good news is that the researchers behind this program are continuing to adjust accordingly and are currently trying a next iteration of the program that might produce stronger results. In fact, the version of the program reviewed here was actually built on a previous program that had been evaluated with worse results. So with each iteration of the program, they are evaluating and refining the model, adding carefully selected elements and concepts designed to fix the gaps revealed in their research. This kind of evaluate-refine-reevaluate approach is commendable and something that all programs should aim for at some level. The only way to build a better program is to evaluate its effectiveness, learn where there might be weaknesses, and try again. This instrumental approach to school-based mentoring makes a lot of sense, and the hope is that a future version of this program might find the right balance between academic supports, instrumental activities, and meaningful relationships.

For more information on research-informed program practices and tools for implementation, be sure to consult the Elements of Effective Practice for Mentoring™ and the "Resources" section of the National Mentoring Resource Center site.

Evidence Rating: Promising - One study

Date: This profile was posted on August 08, 2017


Program Summary

This is a school-based, one-on-one mentoring program designed to improve academic performance and life satisfaction, and decrease absences and behavioral infractions among middle school students. The program is rated Promising. The intervention group had significantly fewer unexcused absences, and significantly higher math and English grades and self-reported levels of life satisfaction. However, there were no effects on school-reported behavioral infractions or grades for science or history.

You can read the full review on CrimeSolutions.gov.

*Note: The National Mentoring Resource Center makes these “Insights for Mentoring Practitioners” available for each program or practice reviewed by the National Mentoring Resource Center Research Board. Their purpose is to give mentoring professionals additional information and understanding that can help them apply reviews to their own programs. You can read this program's full review on the CrimeSolutions.gov website. 


In considering the key takeaways from the research on this program that other mentoring programs can apply to their work, it’s useful to reflect on the features and practices that might have influenced its rating as “Promising” (that is, a program that has shown some evidence that it achieves justice-related and other goals when implemented with fidelity).

Update: One of the nice features of the Crime Solutions rating process and its listings of reviewed programs is that reviews are subject to change if a program undergoes additional evaluations over time. Subsequent evaluations will often confirm that a program model is effective, or perhaps even indicate that it can be applied more broadly or with even greater impact under certain conditions. Other times, subsequent evaluations can give contradictory results, throwing the efficacy demonstrated in prior studies into doubt. In the case of the Brief Instrumental School-Based Mentoring Program, whose revised version had risen to the level of "Promising" based on studies from several years ago, the program remains at the "Promising" designation, although the newest study had decidedly mixed findings.

It is worth noting that the most recent evaluation of the program found significant positive estimated impacts on school behavioral infractions, math grades, and students' reports of emotional symptoms and school problems relative to their peers. But this program has shown both positive and null effects on a number of outcomes and across multiple studies. What is a practitioner to make of a program that is a bit of a "mixed bag" when it comes to effectiveness? We unpack a few of the reasons for these mixed results here and emphasize things that practitioners may want to keep in mind when designing and evaluating their programs. The original Insights for Practitioners from 2017 is also included below, as it still offers plenty of good food for thought that mentoring professionals will want to consider.

1. Beware the perils of looking for too many outcomes, too often.

One of the great things about this program model is that the developer, Dr. Sam McQuillin of the University of South Carolina, is constantly tinkering with the program, making small-but-meaningful tweaks to the program over time in an effort to improve it. In fact, you’ll notice below in the original version of this Insights that we praised the program for continually tweaking and testing the program in a continuous improvement mindset. It was this process that got the revised version of this program to a “Promising” rating after the original version of it was rated as “No Effects.”

But there is risk every time you tweak a program's delivery or evaluate its outcomes. There is also risk in looking for a wide variety of outcomes that, while relevant to the work of the program, may not be of critical importance in determining whether the program is achieving its mission. Both of these factors played a role in the mixed results of the new study. This latest evaluation was an attempt to try some new aspects of the training and to serve the youth with the most need in the participating schools. McQuillin and McDaniel (2020) noted that this iteration of the program sought to serve the "middle school-aged adolescents who displayed the highest levels of school-reported disciplinary infractions," while also studying the effectiveness and ease of delivery of the motivational interviewing-based activity curriculum. Both the population-specific effectiveness and the ease of "delivery" of the intervention are good things to test, and any opportunity to learn more about the program is likely beneficial. But positive outcomes are never guaranteed, even for a program that has produced them in the past. And every fresh look is a new chance to "fail," or at least not get an "A."

This latest evaluation of the program did find a number of positive outcomes for participating youth, as noted above. But it also failed to find them in a number of other categories: grades other than math, and teacher-reported outcomes like behavioral problems and academic competence. The other recent evaluation that informed this updated review also failed to find significant positive effects for youth in the program compared to their unmentored peers. The Crime Solutions rating system is a high bar for programs to clear, and with good reason: We must ensure that public investments in prevention and intervention efforts actually reduce crime, delinquency, and related factors. But it is also a rating system that can disincentivize further research into a program model, or innovations that might improve a model that has already proven effective, if the goal is simply to have one's program found effective. Every evaluation undertaken and every outcome examined is another opportunity to get an unexpected result, one that can lead to a less desirable classification of the overall evidence and contribute to a public perception that the program is ineffective (or at least less effective than previously suggested). There are many programs in Crime Solutions that achieved nothing of value and should be noted as such. This program avoided seeing its rating downgraded, but in general, lumping programs that have demonstrated some positive impact in with those others seems potentially misleading at best and inaccurate at worst. Thankfully that was avoided here, but this downside of examining many outcomes at once, or of continually engaging in high-quality evaluation, should be noted by programs and evaluators alike, as it can carry negative public consequences.

2. College students might be a good fit as mentors if your program wants to integrate a complicated new curriculum or emphasize fidelity.

One of the interesting aspects of the implementation of this program is the use of college students as the mentors. The program utilizes a curriculum and set of activities based on motivational interviewing (MI), a strategy for changing how people think and behave without, in essence, directly asking or "prescribing" them to do so, and one that is notoriously challenging to master, even for experienced therapists and social workers. The mentors in this instance were college students, many of them psychology or related majors who might have an inherent interest in learning about MI. One of the main goals of this study was to see whether the mentors liked the materials and could use the MI strategies effectively. In the McQuillin and McDaniel (2020) study, these college students reported that, for the most part, they found the material and techniques of the program to be acceptable, appropriate, and understandable. But one wonders whether mentors who come from all walks of life, and who might not be as interested in a concept like MI, would offer similar ratings.

Thankfully, Dr. McQuillin is partnering with MENTOR, a national organization devoted to supporting mentoring programs with training and technical assistance, to pilot test materials based on this program in a wide variety of programmatic settings to see if mentors generally find the materials to be helpful and usable, or if there are subgroups of mentors, such as college students, who might be more effective in their use. The hope is that other mentors can use MI as effectively as college students have in the several trials of this program.

3. Programs should be encouraged to try out new strategies and mentoring techniques!

In spite of some of the challenges around properly quantifying the impact that this program has demonstrated on participating youth, the reality is that many mentoring programs are trying to find ways to make their mentors more effective and are turning to strategies like MI to try and boost their results. The most recent study of this program offers several great ideas to keep in mind. First is the importance of testing feasibility, understanding, comfort, and use when implementing a new approach or concept, as this program did. The evaluation report features a great chart showing all of the different things they asked mentors about: Were the materials easy to understand? Were you enthusiastic about using them? Did they seem to fit the mission of the program? Did they fit your style as a mentor? Did you have challenges trying this out in real life? Did you need more training and support? By examining all those aspects, the evaluation was able to determine that, at least in this program setting, perception of the training and use of the materials was positive. No program can do great work if the mentors haven't bought into the process and aren't happy with the tools.

But there were critiques from the mentors. Some felt that the program needed parent "engagement," if not outright direct involvement, for the MI work to be successful. Perhaps more importantly, the mentors wanted more training and practice on the use of the MI strategies. They understood the materials. They "got" the concepts and were willing to use them when meeting with mentees. But even these really bright college students still felt a need for more practice and training, which isn't surprising given the aforementioned challenges of MI use.

I am sure the developers of this program are working on other studies that will tweak some things and test them to see if they can solve the "more training" issue. That's what a continuous improvement approach looks like. Here's hoping that, once those studies are done, they don't get inadvertently punished by policymakers for having the guts to keep looking and learning.

Original Insights for Practitioners posted in 2017:

In considering the key takeaways from the research on this program that other mentoring programs can apply to their work, it’s useful to reflect on the features and practices that might have influenced its rating as “Promising” (that is, a program that shows some evidence that it achieves justice-related and other goals when implemented with fidelity).

1. The value of testing program improvements over time.

One of the best pieces of advice for youth mentoring programs is that they should always be tinkering with their approach and testing the results to see if aspects of the program can be improved. This can be as simple as adding fresh content to mentor training, beefing up match support check-ins, or directing mentor recruitment toward specific types of individuals because they might be a better fit for the role than previous mentors. Any good program will constantly be looking for subtle ways to make what they do better. And in the case of the Brief Instrumental School-Based Mentoring Program, we have a great example of the type of payoff that can happen when a program takes this kind of iterative approach to developing its mentoring services over time.

An earlier version of this program was reviewed for Crime Solutions and found to have "No Effects," meaning that it did not produce meaningful outcomes in many of the main categories in which it was trying to support youth. But as noted in the "Insights for Practitioners" for that iteration of the program, this is a program that has constantly evaluated and revised itself in a continuous improvement approach. In fact, the earliest version of the program showed evidence that it may have actually produced some harmful effects for youth. So the developers improved the training and content of the program and evaluated it again, finding that they had eliminated the harmful impacts but not quite reached their goal of meaningful positive outcomes.

So they revamped and tried again, once more emphasizing rigorous evaluation to see if their improvements worked. This version of the program added a host of improvements: the provision of a mentee manual, revised mentor training and supervision, more choice for mentors on match activities, and e-training and support. And these changes again paid off in a measurable way.

Figure 1. Effect sizes across the three versions of the program, by outcome area (from the McQuillin and Lyons evaluation report).

This figure, taken from the report of the evaluation conducted by McQuillin and Lyons, shows the notable improvements that are evident across the three versions of the program. In each of the outcome areas listed along the x-axis, we can see that this latest version of the program outperformed the earlier versions, sometimes markedly, with several outcomes having effect sizes (levels of impact) that are consistently well beyond the average of around .20 found in the last comprehensive meta-analysis of mentoring program effectiveness. This figure is a powerful example of why we need to allow mentoring programs some missteps along the way, so that they have the opportunity and time to try new ideas and improve the weaker aspects of what they are trying to do. Unfortunately, many programs lose support and funding after an evaluation that shows poor or even less-than-desired results. Others may develop internal cultures that do not support questioning current practices and thus miss out on opportunities to improve through innovation and evaluation. Policymakers often view programs as "working" or not, when the reality is that most programs need time to work out the kinks and a chance to find better ways of delivering their services. The story of this program really highlights why practitioners need to be constantly trying to improve what they do and why program stakeholders need to exhibit some patience as the program works toward its "best" design.
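For readers less familiar with effect sizes, the figures reported in mentoring meta-analyses (including the roughly .20 average cited above) are typically standardized mean differences, often labeled Cohen's d. As a minimal sketch with purely hypothetical numbers (not drawn from this evaluation's data):

d = (average outcome for mentored youth - average outcome for comparison youth) / pooled standard deviation

So if, say, mentored students averaged an 82 in math, comparison students averaged a 78, and the pooled standard deviation of math grades was 10 points, then d = (82 - 78) / 10 = 0.40, roughly double the .20 benchmark referenced above. The value of a standardized metric like this is that outcomes measured on very different scales (grades, absences, life satisfaction ratings) can be read against the same benchmark.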

2. The more complicated the mentors’ task, the more supervision and support they need.

Among the many improvements that were made in this version of the Brief Instrumental School-Based Mentoring Program, perhaps none is more notable than the bolstering of the supervision and support offered to mentors. Although the evaluators of the program did not test to see if this enhanced supervision was directly responsible for the improvements in the program’s outcomes, it is quite plausible that this played a meaningful role.

For this version of the program, mentors were not only provided with considerable up-front training but also given a wealth of support and guidance in delivering the curriculum of the program and integrating it into the relationship that each mentor had with his or her mentee. Keep in mind that this program asks mentors to engage in a number of highly particular conversations and activities with their mentees that build in a sequential fashion and provide youth with very specific skills and ways of thinking about themselves and their academics. Mentors are asked to engage in motivational interviewing, apply cognitive dissonance theory, and teach academic enabling skills. Doing these things well requires adhering closely to a set of phrases, behaviors, and talking points, and the developers of this program didn't want to simply do a training and leave the rest to chance (as they had in the past).

Before each session, mentors would meet with a site supervisor, who would review the curriculum for that particular session, reinforce keys to delivering the content well, and answer mentors' questions. After each session, mentors again met with site supervisors to go over their checklists of actions and see whether they had covered all of the content they were supposed to during the session. Mentors were also encouraged to seek additional phone-based support from their supervisor as needed, and they were provided with a manual that they could refer to at any time to get more familiar with the content and practice the kinds of key messages they were supposed to be delivering to mentees.

The type of intensive “just-in-time” mentor training and supervision offered by this program is likely beyond what most school-based mentoring programs can offer their mentors with their current resources (it’s also worth noting that most programs are simply not asking mentors to engage in activities that are this tightly controlled and specific). Just about any program could, however, borrow the concept of pre- and post-match check-ins with a site coordinator as a way of boosting program quality. These meetings could be brief but critically important to reviewing what the match is focusing on, sharing information about what the mentee has been experiencing recently (at school or at home), and reinforcing key messages or talking points that have the potential to either help make the relationship stronger or offer more targeted instrumental support. As the evaluators of this program note, “One persistent problem in SBM intervention research is the confusion surrounding what occurs in mentoring interactions and relationships,” which results in not only challenges in improving the program but also in helping others to replicate proven mentor strategies in other programs. Using the kind of rigorous pre-post mentor support used by this program might allow programs to understand what mentors and mentees are doing when they meet and make sure that important aspects of the program are delivered with fidelity, even if those important aspects are not as complicated as those in this particular program.

3. Remember to borrow evidence-supported practices and ideas from other youth development services.

We have frequently encouraged mentoring practitioners in these "Insights" pieces to borrow ideas and tools from other youth-serving interventions, both within mentoring and beyond. The Brief Instrumental School-Based Mentoring Program-Revised offers a great example of this in its sourcing of the "academic enablers" used by mentors. Rather than inventing a new set of tools and techniques, the program simply borrowed and adapted materials from an intervention for children with ADHD that used coaches to help improve academic skills. The materials had already been used with good results in a short-term intervention, meaning they would fit the timeline of the mentoring program. Their prior success with students with ADHD also suggested they would be a good fit for any mentees with learning disabilities. So rather than reinventing a suite of activities to teach mentees about agenda keeping, planning, and organizational skills, the program simply adapted something that already existed and fit its school-based model. This type of adaptation can save staff time and resources, while increasing the odds that the program is doing something that will be effective.

4. Don’t forget that mentoring programs are relationship programs.

The Insights about the earlier version of this program noted that there may have been challenges in developing relationship closeness given the program’s brief duration (8 sessions) and the highly scripted nature of the interactions (lots of curriculum delivery, not a lot of fun): “One wonders what the results might have been if this program had emphasized the ‘brief’ and/or the ‘instrumental’ just a bit less or otherwise figured out a way to give the participants more time to build a stronger relationship rather than an ‘alliance’.”

Well, this iteration of the program did exactly that by emphasizing more play, games, and freedom in session activities (provided that the core intervention was completed). As explained by the researchers: “the SBM program described in this study includes a variety of activities designed to promote a close relationship between the mentor and mentee because the quality of the mentoring relationship is theorized to be critical for helping mentees achieve their goals. Within an SBM context, brief effective instrumental models of mentoring that also include activities to develop close relationships may be an ideal ‘launch-pad’ for SBM programs with a developmental model to extend the period of the brief mentoring beyond the brief of mentoring.”

This serves as another example of this program learning from a prior attempt and doing something better (in this case, strengthening the relationships themselves). It also provides some food for thought for any school-based mentoring program. Given that many school-based programs are brief and focused on fairly instrumental pursuits, how can these programs not only strengthen the relationships but also keep those relationships going once they get strong? It seems a shame to have a program forge new and meaningful relationships, only to use them in service of something that is, by design, short term. School-based programs should consider partnering with other programs or finding other ways of using a "brief instrumental" program as a testing ground for relationships that can transfer to another program or setting if they "take root." The developers of this program note that the ideal goal of school-based mentoring might be to "increase the immediate impact of mentoring on student outcomes and promote long-term mentoring relationships desired by developmental models of mentoring."

It is unclear how many of this program's brief matches (if any) went on to have longer-term engagement outside of the program. But all school-based programs should be asking, "How can we keep these relationships going once they have tackled the targeted things we ask them to achieve?" Without exploring that, we may minimize the value of school-based mentoring and deny mentees and mentors the opportunity to grow something brief into something quite monumental.


For more information on research-informed program practices and tools for implementation, be sure to consult the Elements of Effective Practice for Mentoring™ and the "Resources" section of the National Mentoring Resource Center site.

Friday, 19 August 2016 11:02

CASASTART

Evidence Rating: No Effects - More than one study

Date: This profile was posted on December 04, 2012


Program Summary

A community-based, intensive case management model that aims to prevent drug use and delinquency among high-risk adolescents, ages 11 to 13. This program is rated No Effects. It had a few significant effects on behavioral outcomes, but did not have a significant effect on youths' drug use or involvement in property crime. One evaluation showed negative effects of the program on behavioral prevalence and frequency measures for female participants.

You can read the full review on CrimeSolutions.gov.

Friday, 19 August 2016 11:08

Family Check-Up (FCU) for Adolescents

Evidence Rating: Promising - More than one study

Date: This profile was posted on March 23, 2015


Program Summary

A family-centered preventive intervention designed to assist families with high-risk adolescents, ages 11–17. The goal is to reduce the growth of adolescents’ problem behaviors and substance abuse; improve parenting skills; and reduce family conflict. The program is rated Promising. Students who received FCU services showed significantly less growth in antisocial behavior and substance use as well as a stable GPA from the start of middle school into high school.

You can read the full review on CrimeSolutions.gov.
