Displaying items by tag: School environment

Monday, 17 August 2020 21:51

Citizen Schools Extended Learning Time Model

*Note: The National Mentoring Resource Center makes these “Insights for Mentoring Practitioners” available for each program or practice reviewed by the National Mentoring Resource Center Research Board. Their purpose is to give mentoring professionals additional information and understanding that can help them apply reviews to their own programs. You can read the full review on the Crime Solutions website.


 In considering the key takeaways from the research on this program that other mentoring programs can apply to their work, it’s useful to reflect on the features and practices that might have influenced its rating as “No effects” (that is, a program that has strong evidence that it did not achieve justice-related goals).

1. A commitment to evaluation over time can have tremendous benefits.

The commitment of Citizen Schools to program evaluation is very impressive, with many studies over the last two decades being referenced for this review—far more than the NMRC typically sees when doing a review for Crime Solutions. It’s not an exaggeration to say that Citizen Schools has been continuously evaluated since 2001, with several cohorts of youth tracked not only through their time in the program, but also into high school through the completion of their senior years. These evaluations examine program implementation and strengths and weaknesses of the program model, as well as student short-term gains and longer-term trajectories.

This is a brave choice for a program, especially in terms of tracking the long-term outcomes of participants well after their time in the program. There is no guarantee that the work of mentors and other caring adults will resonate for youth over time, as a mentor's wisdom and imparted lessons may fade in the years and critical moments that follow. But Citizen Schools should be applauded for testing to see if the work they do in middle school has an impact on the high school experiences of the youth they serve. The program is designed to not only address immediate academic needs through an extended learning time approach, but also to provide enrichment, skill-building, and future planning that one would expect might change the long-term academic fortunes of the youth served. The only way to find out is to collect the data.

The drawback comes when the results are less than expected, when all the life that happens in the high school years washes away some of the growth that happened in the program. The two major evaluations here paint quite different pictures, with the evaluation of the original Boston site showing evidence of some meaningful benefits for participants' academic trajectories and the more recent evaluation of the nationally expanded sites showing little to no discernible long-term impact for the full sample.

But it is also clear from these studies that Citizen Schools is a data-driven organization that has made several changes to their practices and implementation over the years as the annual data from these evaluations came in. Readers are encouraged to read the evaluation reports that informed this review, as they offer a good example of a program telling its story and making changes over time through continual evaluation activities. And that early evaluation provided the evidence they needed to expand their model to other cities, which is but a dream for many providers who want to take their work to scale. So regardless of the findings of their impact evaluations, Citizen Schools deserves a lot of credit for doing this level of process and long-term outcome evaluation in the first place.

2. Bringing in a “second shift” isn’t easy.

Because Citizen Schools included process, or implementation, evaluation as part of that ongoing research work, they have been able to learn a great deal about what works in their model and where the sticking points of implementation may lie. Practitioners who are developing similar programming can learn a lot about the challenges of engaging a "second shift" of adults to come in and pick up where the first shift of teachers and school personnel leave off. Arguably the biggest hurdle in making that work is coordinating the efforts of Citizen Schools staff and volunteers with the content of the school day and the culture of the school. At some of their sites, this coordination was noted as a major challenge, but the seemingly more successful sites found a way to share information with the Citizen Schools staff and those teaching the apprenticeship classes. This ensured that homework help was in accordance with how the subjects were being taught during the school day and that information about particular students' struggles was shared so that they could receive some individualized support.

This “overlap” of shifts also allowed the program to mirror the behavior and discipline policies and procedures of the school. So, for example, if the school was implementing a behavioral curriculum that set up rewards for good behavior, the program could also offer that and give youth consistent messages about their behavior.

But it also sounds like managing student behavior was a frequent issue at many of the sites, as noted in participant surveys. One can imagine that a staff composed mostly of younger AmeriCorps volunteers and a group of volunteer mentors might struggle to manage large groups of youth in an afterschool setting where the normal disciplinarians of the school day are not around.

There were also issues related to staff turnover; in fact, the model requires Teaching Fellows to transition out of their role after two years. This means that the people responsible for running the program, who have built relationships with parents and figured out how everything works best in a particular setting, simply move on after two years of building up that institutional knowledge. That's a lot of experience and relationship walking out the door. One wonders if the model might have more stability if those Fellows were regular paid staff who could stay in place over time.

But, as one of the evaluation reports notes, Citizen Schools learned all these things as a result of their process evaluation work and, as the program expanded nationally, they used this information to codify the requirements and structures of the program in ways that appeared to improve implementation over time, if not the overall impact results. So once again, the lesson is that one can’t learn lessons if one isn’t looking for them in feedback and data.

3. If you build it, can it be maintained?

The most recent evaluation report noted that sustainability after the initial seed money implementation was challenging for many schools. This is not surprising, as the Citizen Schools model prioritizes resource-deficient schools and communities, with the idea that if the program can tap into the strengths of those communities, it can build a lasting programmatic infrastructure.

But that seed money notion is always harder to make work in real life, and the evaluation makes it clear that coming up with the next round of funds to keep the program going was a common challenge. A major culprit was turnover in school or district leadership, which often meant that program champions moved on, leaving the program without key advocates who could make sure it was included in future budgets or raise funds from other sources. Practitioners working on a start-up grant would be well advised to continue to find allies and advocates who can champion the program, as there is no guarantee that the original supporters will remain in the mix over time.

4. The mandatory vs. opt-in debate is challenging.

The original iteration of Citizen Schools in Boston was offered as an opt-in option for youth who were interested in the offerings of the program. This approach often resulted in a strong critical mass of youth participants who really wanted to be there. It also likely kept the program at an appropriate size and scale for the local resources available.

As Citizen Schools expanded nationally, it seems that they decided to make their model mandatory for all youth in particular grades. This likely facilitated stronger participation and buy-in from the school staff and parents. Yet, it also meant that some youth were mandated to be there when they would have rather been doing something else with their time. Anyone who has worked with middle schoolers knows that they love their first tastes of autonomy and greatly value their independence and free time. From the feedback gathered in the evaluation, it sounds like this was an issue for some of the sites, as youth resented being held at school for several more hours, even if they were likely getting some needed help out of the experience.

There are examples of other programs offering “whole grade” mandatory models. For example, the Peer Group Connection program led by the Center for Supportive Schools employs a model where an entire freshman class is mentored for the duration of their first year of high school. But those activities happen during the school day and are integrated into the freshman experience. It seems like that dynamic changes once the clock hits 3:00 and students start thinking about life outside the school walls. So while there are good reasons to go with a mandatory “whole grade” participation approach, it can have downsides that practitioners need to be aware of, especially when working with older youth who have rich lives and a sense of autonomy to nurture.

5. High school “access” work can be good practice for college access work.

One of the striking things about the original Boston-based evaluations of Citizen Schools is just how much effort the program put into steering participants to what the program termed “high-performing” high schools. In fact, one of the major metrics they focused on was how many alumni went to and persisted in one of these schools. And if you look at how youth were supported in this transition, it looks a lot like the work of college access programs: providing youth and parents with information about various high schools they could apply to, touring schools to assess fit, helping with the application form and financial aid considerations, etc. While the evaluations of Citizen Schools did not track cohorts of students into college, one wonders if practicing that selection and application process for high school built skills that proved handy when applying to higher education.

Now, not every part of the country has a high school system that operates with the same type of choice as Boston's odd mix of private schools, parochial schools, charter schools, and traditional public schools. Students there have a lot more choice and mobility than do students in many other cities. But teaching these skills about how to choose and apply to the right school may be tremendously valuable, and it's a shame that the evaluation didn't look at whether those skills were helpful years later when youth were out of the program but doing something similar in the college application process.

6. It’s always good to read honest discussions of research limitations and biases.

One notable thing about the two main evaluation reports of Citizen Schools noted here is just how clear the authors were in calling attention to the potential for false positives, biased samples, and other limitations of the research that may have clouded the findings. Any good evaluation report will have a limitations section, but those can sometimes downplay the severity of the issues or gloss over flaws in data collection or analysis. But that was not the case here.

The 2010 study by Arcaira and colleagues notes that there is a strong possibility of hidden bias in the results, mostly caused by the fact that youth who opt into Citizen Schools may differ in meaningful ways from those who don't. This possibility threatens the validity of even their matched comparison sample. It's also worth noting that when a program participant dropped out of the study, they were replaced by a student who did not, meaning that the evaluation did not follow program dropouts, focusing instead only on those students who stuck with it. Once again, those persisting students might have strengths or supports that other students do not, but that are unaccounted for in this design. Non-participants in the comparison group who dropped out were also replaced by others who persisted. The report authors ultimately conclude that these and other factors may have overestimated the impact of the program, or at least muddied things to the point where the validity is in question. And for an evaluation where many of the positive results were only marginally statistically significant, that's a real issue.

The 2016 evaluation by Abt Associates offers a similar honest assessment of validity threats. It notes that data around participant experiences was not collected in a fully structured way and that not all youth or staff answered the same questions or participated equally in the data collection. The evaluation also notes the general research concern that most of the participant reports were done via self-report surveys, which can be positively biased as respondents give answers that are socially acceptable or “nice” but not necessarily accurate (e.g., program participants may be prone to report overly positive outcomes to help the program look good or even to cognitively justify their own time investment in program participation).

The Abt evaluation notes another issue that impacted their ability to tell a full story: variation in implementation across sites. The authors stress that Citizen Schools sites varied considerably in their implementation. Some of that is due to local customization that is entirely appropriate to the model and the available local resources. But much of it also appeared to be the result of implementation challenges, staff turnover, dwindling funds, or a poor fit between the program and the school conceptually. This leads to an evaluation context where sites are doing potentially radically different things while the evaluation tried to say something global about the “model.” If that model varies too widely, you wind up comparing apples to oranges to the whole fruit basket. It’s hard to say whether the Citizen Schools model was effective when there was so much variation in what youth were offered and received.

But it’s refreshing to see evaluation reports that honestly describe these concerns. Practitioners should always insist that their evaluation reports provide similar context around the strengths and limitations of the evaluation design and available data.


For more information on research-informed program practices and tools for implementation, be sure to consult the Elements of Effective Practice for Mentoring™ and the "Resources" section of the National Mentoring Resource Center site.

Monday, 17 August 2020 22:01

Citizen Schools Extended Learning Time Model

Evidence Rating: No Effects - More than one study

Date: This profile was posted on July 13, 2020


Program Summary

This is an afterschool program that prepares middle school students for academic and social success. The program is rated No Effects. Participants showed significantly higher rates of attendance and a greater likelihood of being on track to graduate and of passing 12th grade English/language arts (ELA) than nonparticipants. Groups did not differ in ELA or math test scores, 12th grade suspensions, passing ELA and math comprehensive tests, or on-time promotion to 12th grade.

You can read the full review on CrimeSolutions.gov.

Tuesday, 15 September 2015 14:31

Cross-Age Mentoring Program (CAMP)

*Note: The National Mentoring Resource Center makes these “Insights for Mentoring Practitioners” available for each program or practice reviewed by the National Mentoring Resource Center Research Board. Their purpose is to give mentoring professionals additional information and understanding that can help them apply reviews to their own programs. You can read the full review on the Crime Solutions website.


In considering the key takeaways from the research on this program that other mentoring programs can apply to their work, it’s useful to reflect on the features and practices that might have influenced its rating as “Promising” (that is, a program that shows some evidence that it achieves justice-related goals when implemented with fidelity).

An emphasis on “connectedness” and how youth relate to the world around them

One of the most compelling aspects of the CAMP model is the role that "connectedness" plays in the design and delivery of the program services. Many mentoring programs claim to emphasize making mentees feel more "connected" to other people and their communities. This seems to be almost inherent in programs that intentionally pair youth with new caring adults (or, in the case of CAMP, teenage mentors).

But the CAMP model brings this notion of connectedness to the forefront. The program theory builds on previous research demonstrating that successes in school and decreases in risky behavior are more likely when youth express feelings of connection to the people, places, and activities in their lives. And this definition of "connectedness" goes beyond simply "liking" somebody or something; it also includes the notion of active engagement and support seeking.

This form of active connectedness is baked into the curriculum-based activities and mentor-mentee interactions in the CAMP program. The activities are designed to help youth with both their social and perspective-taking skills, allowing them to feel more comfortable interacting with the world around them and to relate better to other individuals, such as teachers, family members, and peers. The structure of the mentor-mentee interactions also promotes connectedness by allowing participants to openly share their feelings about their mentoring relationships (both positive and negative), express feelings of support, and practice saying "goodbye" to one another so that feelings of connectedness between mentor and mentee don't evaporate at the end of the match. By helping mentees connect to and better navigate their world, CAMP promotes true development of the mentee and builds skills that should help long after program involvement.

The evaluations of CAMP to date have shown that it can promote connectedness in mentees, especially in the domains of school and family. Programs that want to measure their own results around notions of connectedness may want to use the same measure as the CAMP evaluations: the Hemingway Measure of Adolescent Connectedness, which can be accessed online at: http://adolescentconnectedness.com/.

A “super” way of involving parents...

One unique aspect of the CAMP model is the structured way that it involves parents and extends the mentoring experience. The program uses what are called "Super Saturday" events as a primary way of involving parents in the program and further promoting notions of connectedness and positive social interaction. These all-day events bring mentees and their families together with mentors and program staff once a month for a Saturday of games, food, and other activities. This gives parents a chance to get to know the mentor and staff and allows the mentees to connect their mentoring experience to the relationships they have at home. In one version of CAMP implementation, these Super Saturdays are a nice supplement to the weekly mentoring experience and can bridge matches over the summer months. In the "faraway" version of the program (mostly implemented in rural areas where transportation is a challenge), these events actually form the heart of the intervention, with most mentor-mentee interactions taking place at these Saturday gatherings, which increase in frequency over the summer months. Programs looking to increase parent involvement without the challenges associated with mid-day or afterschool events may want to consider creating a fun and engaging Super Saturday of their own as a way of bridging the worlds of school and home and allowing parents to see their youth participating in this new kind of friendship with an older student or adult.

Allowing peer mentors to truly own the program

Because CAMP takes its "developmental" approach to heart, it's also important to recognize the ways in which it may promote the growth and development of the peer mentors. In fact, one of the neat things about the CAMP model is that the full implementation of the program would ideally allow youth to start in the program as elementary-age mentees, then transition into middle school protégés (mentors in training, essentially), before finishing out the program as high school-age mentors. Michael Karcher, developer of the CAMP program, describes this as "walking up the developmental ladder" and has designed the program to facilitate the journey from mentee to eventual mentor. CAMP can be implemented without all of these transitional layers (more as a typical high school-to-elementary student peer mentoring structure). But, for districts that could facilitate the full implementation, CAMP offers a great way of fostering multi-year involvement with youth changing their role over time. One can imagine that high school mentors who once served as mentees might really understand the value and approach of the program and would take their stewardship of the program seriously.

The other way that CAMP seeks to provide peer mentors with ownership of the program is by giving them an active role in designing new activities for matches to do together. The implementation materials for CAMP do provide a 36-week curriculum of meaningful activities. But the adults running the program are strongly encouraged to allow the peer mentors to make changes to the activities and suggest new ones from year to year. This gives them an active role in customizing the program to their local circumstances and needs, while providing a great leadership opportunity and a chance to “own” the future of the program.

Because the peer mentors are allowed to guide and shape the program to this degree, they may benefit from the experience as much as the mentees. In fact, a 2009 evaluation of CAMP found that peer mentors improved in their own reported feelings of academic connectedness and self-esteem compared to a similar group of students not participating in the program. Thus CAMP can be viewed as a model that may produce meaningful outcomes for multiple groups of youth simultaneously.


For more information on research-informed program practices and tools for implementation, be sure to consult the Elements of Effective Practice for Mentoring™ and the "Resources for Mentoring Programs" section of the National Mentoring Resource Center site.

Friday, 19 August 2016 09:56

Cross-Age Peer Mentoring Program

Evidence Rating: Promising - More than one study

Date: This profile was posted on October 06, 2015


Program Summary

The program is a school-based peer mentoring program in which high school students provide one-on-one mentoring to late elementary and early middle school students. This program is rated Promising. The mentored children showed significant improvement on measures of spelling achievement and connectedness to school and to parents compared with the control group. However, mentored and control group children did not significantly differ on connectedness to reading, future, or friends.

You can read the full review on CrimeSolutions.gov.

APRIL 15, 2016
BY: DELIA GORMAN, PROGRAM MANAGER, MENTOR: THE NATIONAL MENTORING PARTNERSHIP

An Interview with Camille Stone, Program Director of the Remote Tutoring and Mentoring Program at We Teach Science

What if the right technology could make mentoring programs safer? What if it could help us better support our mentors, while preparing young people for a brighter future?

As a part of our monthly Collaborative Mentoring Webinar Series, MENTOR facilitated a webinar this February called “Mentoring in the Age of Technology”, which explored the impacts of technology on youth mentoring, and featured seasoned mentoring practitioners who use technology as a mentoring tool.

Friday, 19 August 2016 10:02

Eisenhower Quantum Opportunities

Evidence Rating: Effective - One study

Date: This profile was posted on September 08, 2015


Program Summary

Also known as the Eisenhower Foundation’s Quantum Opportunities Program, the program is an intensive, year-round, multicomponent intervention for high-risk minority students from inner-city neighborhoods, which is provided throughout all 4 years of high school. The program is rated Effective. Program participants had significantly higher grade point averages, high school graduation rates, and college acceptance rates as compared with control group youths.

You can read the full review on CrimeSolutions.gov.

Monday, 14 September 2015 16:57

Eisenhower Quantum Opportunities

*Note: The National Mentoring Resource Center makes these “Insights for Mentoring Practitioners” available for each program or practice reviewed by the National Mentoring Resource Center Research Board. Their purpose is to give mentoring professionals additional information and understanding that can help them apply reviews to their own programs. You can read the full review on the Crime Solutions website.


In considering the key takeaways from the research on this program that other mentoring programs can apply to their work, it’s useful to reflect on the features and practices that might have influenced its rating as “Effective” (that is, a program that shows evidence that it achieves justice-related goals when implemented with fidelity).

“Deep” mentoring as a strategy for more meaningful support

The Quantum Opportunities program offers a nice example of mentoring viewed as inherently a longer-term strategy, not a brief engagement that can hopefully be renewed from year to year, if at all. The developers of the program are very intentional here about addressing what they see as a frequent set of issues in much of mentoring programming to date: the "short" (a year or so) duration of most academic-focused mentoring programs (i.e., those seeking to promote positive educational outcomes for participating youth), the limited nature of school-based relationships, and the expectation that measurable outcomes for participating youth will appear rapidly.

Quantum Opportunities is designed as a four-year program in which, ideally, the same mentor (mentors were paid staff members, required to hold a college degree and have experience in youth development) works with a student (called an "Associate" in the program) from the beginning of 9th grade through graduation. There are other programs, such as Friends of the Children, that use paid staff in a mentoring role over long periods of time, but Quantum Opportunities is distinctive in its exclusive focus on the high school years and all of the challenges that can pop up both in and out of school for youth over that time.

The mentoring offered by Quantum really does emphasize the depth of the engagement in the mentee’s life:

  • Mentors are expected to get to know the Associate’s family and friends and integrate themselves into the existing web of support in the student’s life and in the community.

  • Mentors serve as true advocates for the Associates, working with families and staff to attend parent-teacher conferences and other “institutional” meetings as needed. The evaluation noted that mentors often appeared in court proceedings or other such meetings where youth needed adult support or representation. It was also common for mentors to help Associates find summer employment and achieve other goals.

  • Mentors also teach life skills around topics such as decision-making, personal responsibility, healthy behaviors, and civic engagement. These planned and structured skills-themed discussions are facilitated in both one-on-one and group contexts. These life skills sessions are designed to give the youth skills that will help them solve problems in the present and thrive after they leave high school.

The program also gives its mentors clear roles and responsibilities within the broader suite of supports Quantum provides, such as dedicated tutoring and youth leadership development. The program has even sought to determine the ideal number of hours that a student would participate in mentoring, tutoring, leadership training, and the other program activities over the course of a year. As such, this program offers a nice example of how to integrate long-term mentoring into other services and supports and provide a depth of mentoring relationship that fits with the overall theory of change of the program.

Using stipends to motivate and incentivize youth participation

The program provides youth with a modest stipend of $1.25 for each hour of participation, although the criteria for earning this stipend varied by participation level and other factors across the five sites participating in the evaluation referenced in the CrimeSolutions.gov profile. Most programs shy away from incentivizing youth in this way, but the developers of Quantum believe that providing modest funds can be instrumental in facilitating the participation of the older youth they serve in mentoring activities and other program-sponsored events and activities. The funds are also used in the financial management life skills course provided by the program. For programs serving older adolescents, creative incentives like this may be a meaningful way to boost program engagement while also teaching some financial planning and money management skills.

So did the stipend boost participation? It's tough to say, as the evaluation didn't specifically address the connection between the stipend and youth motivations. But it did track the hours of participation against the aforementioned "ideal." The program as designed calls for youth to participate in a total of 410 hours of activities a year, with 180 being spent in mentoring and tutoring activities. In this evaluation, the average student spent 291 hours a year in the program, with 135 hours of mentoring and tutoring. So the youth essentially got about 75% of the mentoring and tutoring they were supposed to.

But interestingly, if the tutoring and mentoring received was split evenly, that works out to about 67 hours of mentoring a year, which is about an hour and a quarter a week, right in line with the standard "dosage" of mentoring found in most mentoring programs. So it seems that even though there was an emphasis on "deep" mentoring, that depth was probably a result of the length of the match and the value of the activities, not the intensity or frequency of the mentoring meetings. These matches met for about an hour a week, like most of the field.
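For readers who want to double-check that arithmetic, here is a quick back-of-the-envelope sketch in Python. The hour totals come from the evaluation as cited above; the even split between tutoring and mentoring is this article's assumption, not a figure the evaluation reported.

    # Participation figures cited above (hours per Associate per year)
    planned_total, planned_mt = 410, 180   # program design: total hours, mentoring + tutoring
    actual_total, actual_mt = 291, 135     # observed averages in the evaluation

    print(f"Share of total hours received: {actual_total / planned_total:.0%}")  # 71%
    print(f"Share of mentoring/tutoring hours: {actual_mt / planned_mt:.0%}")    # 75%

    # Assume (as above) an even split between tutoring and mentoring
    mentoring_per_year = actual_mt / 2            # 67.5 hours
    mentoring_per_week = mentoring_per_year / 52  # ~1.3 hours, about an hour and a quarter
    print(f"Mentoring per week: {mentoring_per_week:.2f} hours")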

A strong integration of leadership activities and community engagement

In addition to the mentoring, tutoring, and life skills work, Associates in the program are also required to choose a personal and community-focused goal to work toward during their participation in the program. The program provided leadership training, and mentors supported students as they chose, developed, and executed their plans toward achieving their goals. Anecdotally, these activities seemed to allow youth to see themselves as burgeoning role models for their community and to offer valuable opportunities to apply newly learned skills. Many of the personal goals Associates chose were directly related to the program's overall goals around graduation and post-secondary planning.

Quantum Opportunities serves as a good example of thoughtful program replication and evaluation done with the needs of our field in mind

The Quantum Opportunities program was expanded to the five urban areas in the evaluation through funds provided by the Office of Juvenile Justice and Delinquency Prevention. The Eisenhower Foundation clearly put a lot of thought into the communities where this program might be a good fit, and the evaluation details the training and technical assistance provided to help with program implementation as these sites worked out the nuances of service delivery and modified small aspects of the model (such as the stipends) to meet local needs and circumstances. The result was cross-site results that looked remarkably similar, while also producing a wealth of information about how the program appeared to thrive best in each unique community. All of this information is documented thoroughly in the evaluation report referenced above.

The report is a wonderful example of research done with the broader mentoring field in mind. It details the thinking behind the replication project, especially how this effort built on a much earlier iteration of the Quantum program, the results of which made clear there was substantial room for improvement. It also explains the design, measures, and outcomes very clearly (it helps practitioners when a program and its evaluation focus intently on three easy-to-grasp outcomes, as this one does). And best of all, it includes a good amount of qualitative interviews in which program staff talk about what they felt made the program work in their settings.

This qualitative information is a treasure trove for practitioners, full of useful tips, such as the perception across sites that the program's emphasis on graduation, not grade improvement, as the primary goal really helped youth feel more comfortable in the program. Apparently, working slowly toward that long-term graduation goal, with long-term support, felt like a better starting point than emphasizing immediate academic improvements. That makes sense, yet it's the type of subtle distinction in program design that probably would have gone completely unmentioned had this evaluation not included qualitative data. This information will not only help in future replications of this model, but will also help spread these practices to larger audiences in our field.

One can hope that future efforts funded by both private philanthropies and public agencies like OJJDP will include similarly detailed and useful information on program replication and implementation in their evaluation reports.


For more information on research-informed program practices and tools for implementation, be sure to consult the Elements of Effective Practice for Mentoring™ and the "Resources for Mentoring Programs" section of the National Mentoring Resource Center site.

Friday, 19 August 2016 10:05

Experience Corps

Evidence Rating: Promising - One study

Date: This profile was posted on August 31, 2015


Program Summary

A tutoring and mentoring program to improve the literacy outcomes of elementary school-aged children at risk of academic failure. This program is rated Promising. Program participants made significantly greater gains in reading comprehension scores and teacher-assessed reading skills over an academic year, as compared with the control group. However, there were no significant differences in vocabulary and word attack scores from pre- to post-intervention.

You can read the full review on CrimeSolutions.gov.

Wednesday, 26 August 2015 09:36

Experience Corps

*Note: The National Mentoring Resource Center makes these “Insights for Mentoring Practitioners” available for each program or practice reviewed by the National Mentoring Resource Center Research Board. Their purpose is to give mentoring professionals additional information and understanding that can help them apply reviews to their own programs. You can read the full review on the Crime Solutions website.


In considering the key takeaways from the research on this program that other mentoring programs can apply to their work, it’s useful to reflect on the features and practices that might have influenced its rating as “Promising” (that is, a program that shows some evidence that it achieves justice-related goals when implemented with fidelity).

Can tutors serve as effective mentors as well?

The first thing that mentoring practitioners should note about Experience Corps is that it is primarily a reading intervention directed at elementary school students who are struggling with literacy and associated academic performance issues. And it does this reading-focused work very well: the evaluation cited in the review indicates that the program produces significant impacts on reading comprehension and grade-specific reading skills as rated by students' teachers. An important question for mentoring practitioners is how much the relationship between the tutor/mentor and the student factors into those outcomes.

If one goes back 15 years or so in the mentoring field, there was much more conflation of mentoring and tutoring than we find today. Those terms were often used interchangeably as practitioners struggled to define the role of volunteers in programs that had a heavy academic focus, but also used a one-to-one relationship to set the stage for the more “instrumental” work of reading support, test preparation, or other targeted academic support.

Over the years, the tutoring and mentoring camps have grown further apart and are now considered by most to be separate, but potentially related, activities. Experience Corps, however, happily blurs that line. Their model emphasizes the need to create strong relationships between the volunteers and the students. Volunteers are formally trained in strategies for bonding with and engaging their students. This approach echoes the blend of instrumental and developmental relationship approaches that mentoring researchers Karcher and Nakkula have promoted, as well as the “working alliance” concepts used in other programs reviewed for the NMRC.

The Experience Corps evaluation does hint that the quality of this relationship is important in achieving the program's targeted reading outcomes: the quality of the student-volunteer relationship, as rated by volunteers, was predictive of better student reading outcomes and was associated with gains in two of the four indicators of reading improvement. Unfortunately, the evaluation doesn't go into much detail about the nature of these relationships, their strengths and struggles, or the amount of time spent on purely relational conversations versus direct tutoring activities. It is worth noting that nearly 1 in 5 (18%) of the relationships were rated as "low quality" by the volunteers themselves, suggesting that the program might consider offering more support to participants around the relationship aspect of the services. But, at least in the case of this program, there does seem to be some evidence that volunteers who by design have a heavily task-focused academic approach to their work can still form strong relationships with students, and that those relationships can help drive program outcomes. A key here may be that this task-focused approach is an upfront aspect of the program that is presumably understood by students from the outset. In fact, research suggests that when mentors take on a heavily academic orientation in more traditional, relationship-focused mentoring programs, the results can be counterproductive.

Dosage matters.

As with any intervention, fidelity to the model is likely to be paramount for Experience Corps achieving its intended outcomes. One of the biggest barriers to that fidelity in many programs can be simply getting mentors and mentees to meet as frequently as they are supposed to. Programs often wonder how many meetings a youth can miss before the impact of the program starts to really taper off. In the case of Experience Corps, the results of the evaluation suggested a cutoff point that will seem familiar to experienced mentoring programs: About once a week. Some students in the program met with their volunteers as many as 96 times over the course of the school year, but the estimated impact of the program really fell off if the student received fewer than 35 sessions, which worked out to about once a week on average. There were probably myriad reasons why volunteers and youth didn’t meet more frequently or regularly, but Experience Corps is now in a position to track relationships against that benchmark and offer additional support to participants who seem in danger of not meeting enough to make an impact. Even without the benefit of a random assignment evaluation, all mentoring programs are encouraged to look closely at their participation and outcome data to try and determine if they too have a minimum “dosage” that they must meet to have a chance at success.
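As a concrete illustration of what tracking relationships against that benchmark might look like in practice, here is a minimal hypothetical sketch in Python. The data structure, field names, and assumed 36-week school year are illustrative assumptions, not details of the Experience Corps model.

    from dataclasses import dataclass

    SCHOOL_YEAR_WEEKS = 36    # assumed length of the school year
    BENCHMARK_SESSIONS = 35   # cutoff suggested by the evaluation (roughly once a week)

    @dataclass
    class Match:
        mentee_id: str
        sessions_held: int    # sessions completed so far
        weeks_elapsed: int    # weeks of the school year elapsed so far

    def projected_sessions(match: Match) -> float:
        """Project the year-end session count from the match's current pace."""
        return match.sessions_held / match.weeks_elapsed * SCHOOL_YEAR_WEEKS

    # Hypothetical matches: the first is on pace, the second is falling behind
    matches = [Match("A101", 14, 12), Match("B202", 8, 12)]
    for m in matches:
        if projected_sessions(m) < BENCHMARK_SESSIONS:
            print(f"Match {m.mentee_id} projects to {projected_sessions(m):.0f} "
                  f"sessions this year; consider offering extra support")

Run against real participation data, a report like this could let staff intervene before a match slips below the dosage level at which the evaluation suggests impact falls off.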

Seniors are an underutilized resource for mentoring programs.

Perhaps the most quietly impressive aspect of Experience Corps is the tremendous investment of time and energy it gets from its age 55-and-up volunteers. These volunteers work with multiple students every week and, in this evaluation at least, averaged about 15 hours a week in one-on-one tutoring and mentoring sessions. This represents a substantial amount of "people power" that they are bringing to this program. Older adults might be drawn to several aspects of the program: the modest stipend provided at some sites, the clear purpose of the work and the ready-to-use materials provided by the program, and the ability to team with other older adults to serve many youth in one school setting.

Unfortunately, the last major analysis of mentors in America found that older adults (specifically, those age 65 and older) are the least likely age group to mentor. Programs should consider how they can make their program model more appealing to older adults, such as perhaps by streamlining the location and methods of service delivery or by helping overcome barriers to participation with financial or logistical support. These volunteers clearly had the motivation and means to contribute to this program and the mentoring field as a whole can learn a lot from studying programs like Experience Corps that have proven strategies for engaging seniors in their work.


For more information on research-informed program practices and tools for implementation, be sure to consult the Elements of Effective Practice for Mentoring™ and the "Resources for Mentoring Programs" section of the National Mentoring Resource Center site.

Thursday, 18 March 2021 15:19

Eye to Eye

*Note: The National Mentoring Resource Center makes these “Insights for Mentoring Practitioners” available for each program or practice reviewed by the National Mentoring Resource Center Research Board. Their purpose is to give mentoring professionals additional information and understanding that can help them apply reviews to their own programs. You can read the full review on the Crime Solutions website.


In considering the key takeaways from the research on this program that other mentoring programs can apply to their work, it’s useful to reflect on the features and practices that might have influenced its rating as “Promising” (that is, a program that has strong evidence that it achieved several of its justice-related goals).

1. If you’re going to do an evaluation, make sure you describe the program well when writing up the results.

This may seem like a small thing to praise this program and its evaluators for, but it was refreshing to see a mentoring program described in real detail as part of an outcome evaluation. Most published studies like this will describe the program at some level, usually focusing on the demographics of the youth served and their mentors, as well as a general explanation of what the program is trying to accomplish. But this article really sets the standard in terms of the quality and depth of information provided: how Eye to Eye selects schools and mentee participants within those schools, the curriculum it uses and the activities matches engage in, and even concrete details about how mentors are identified and screened. Far too often, these Crime Solutions reviews and subsequent profiles provide plenty of information about the evaluation, but practitioners are left wondering how the program achieved its results or how they might proceed if they wanted to do something similar. By providing such rich descriptive information in the study write-up itself, this program has made it much easier for other practitioners to understand the practices that might best support youth with learning disabilities (LD) and attention-deficit hyperactivity disorder (ADHD).

2. Credible messengers once again prove to be valuable assets to others.

One of the factors that seems likely to be involved in the success of the Eye to Eye program is the use of older peers who themselves have LD/ADHD diagnoses as the mentors. Using these youth, specifically, in the mentor role might be crucial in helping mentees feel understood, listened to, and able to visualize themselves becoming a thriving adolescent or young adult. Mentors without those diagnoses may have also been effective in working with these youth—and the evaluation here did not test this or compare these diagnosed mentors to other types of individuals. But we have seen credible messengers used many times before in the mentoring field, and there seems to be some real validity to using mentors who know exactly what it's like to be in their mentee's shoes. (See the Insights provided for the Arches and My Life programs for examples of other programs using mentors who had faced similar challenges to the youth they are serving.) It's not surprising that measures of relationship quality for these matches were quite high and correlated with stronger outcomes in the direction of lowered depressive symptoms, increased self-esteem, and improved personal relationships. Sometimes we learn the most from those who have travelled the same path before us. What's especially promising about the Eye to Eye approach is that the mentors also appear likely to have gotten a lot out of the experience, taking an ownership role in how the model was implemented in their school and building leadership and project management skills. Even though the evaluation didn't explore this aspect in detail, one can assume that serving as leaders and mentors like this may have also helped the mentors feel more positive about their own LD/ADHD circumstances.

3. When designing a curriculum, get the best help possible from experts.

One of the great debates in mentoring is whether the impact of mentoring comes from the strong relationship (ideally) formed between mentor and mentee or whether those outcomes are the result of the activities they do together. And while the answer may well be that the best impacts often come from both of those things in tandem, the reality is that for a program like Eye to Eye that is working with youth with serious disabilities and conditions, having a good curriculum that’s designed to facilitate specific learning moments and interactions tailored to those youths’ needs may be critical.

But programs often wonder how to develop a curriculum that is the right fit for what they want to accomplish with youth. Eye to Eye offers an excellent example of how other programs might want to approach this. They started by identifying core socio-emotional objectives based on a longitudinal study of youth with LD/ADHD that had previously identified success attributes that helped those youth thrive. With those objectives in hand, the program engaged a number of groups in designing relevant activities that would speak to those objectives: a team of educators with LD/ADHD themselves, a focus group of young adults, and, perhaps most crucially, faculty and graduate/postdoctoral students at Brown, Harvard, and Columbia Universities. That’s a lot of expertise and different viewpoints all contributing to the formation of these activities. In the end, the program had developed an arts-based curriculum that uses fun and creative hands-on projects to get mentors and mentees talking about strengths and challenges related to their LD/ADHD. This is an excellent example of how researchers, subject matter experts, and client voice can all be harnessed to produce something that is custom tailored to the youth that are the focus of the program. And, because Eye to Eye is so transparent about their model (see point #1 above) they even make a national office email available in the study write-up for those who want to learn more about it or adapt it for their programs.

4. If you are evaluating your program and want a comparison group, make sure you know if those youth are being mentored somewhere else.

One of the nice things about this evaluation is that they compared the outcomes of mentored youth in the program with two different comparison groups: unmentored youth with LD/ADHD at similar schools and youth at similar schools who did not have LD/ADHD. A small but important detail in setting those groups up is that they excluded youth from both groups who indicated they were receiving mentoring through some other program. In this case, they didn’t want to compare Eye to Eye against other programs or service providers, but against a hypothetical situation that isolated the influence of Eye to Eye mentors compared to very similar youth who didn’t have the services of the program.

Now, this study was not a true random assignment design, but the researchers did purposefully restrict who got into those comparison groups. So while this may not have been a pure experiment, the program also seems to have avoided one of the big reasons that many mentoring program evaluations struggle to show results: the comparison kids going and finding mentoring somewhere else. While youth in the comparison schools were not restricted from seeking out and receiving other services, it's likely that few did, given the somewhat novel intervention offered here, which the authors note had few comparisons in the literature.

When significant numbers of the youth who serve as the counterfactual to the work of the program go and get similar services somewhere else, it can wash away differences between the two groups at a meaningful level. You are no longer comparing mentored and unmentored youth; you are comparing mentored and differently mentored youth. And given the tight thresholds used in the statistical analyses of these types of evaluations, that can often make the difference between having several positive, statistically significant findings and having results that look like the program achieved nothing.
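To make the arithmetic behind that concern concrete, here is an illustrative sketch in Python using the standard minimal-detectable-effect approximation for a two-group comparison (80% power, two-sided alpha of .05). Every number in it (the true effect, the sample size, the contamination rates) is an assumption for illustration, not a figure from any evaluation discussed here.

    import math

    def minimal_detectable_effect(n_per_group: int) -> float:
        """Smallest standardized effect detectable at 80% power, alpha = .05."""
        z_alpha, z_power = 1.96, 0.84
        return (z_alpha + z_power) * math.sqrt(2 / n_per_group)

    TRUE_EFFECT = 0.40   # assumed true impact of the program, in SD units
    N_PER_GROUP = 150    # assumed youth per group
    mde = minimal_detectable_effect(N_PER_GROUP)   # about 0.32 SD

    # If a fraction of the comparison group receives equivalent mentoring
    # elsewhere, the observable group difference shrinks by roughly that fraction.
    for contamination in (0.00, 0.25, 0.50, 0.75):
        observed = TRUE_EFFECT * (1 - contamination)
        verdict = "detectable" if observed >= mde else "looks like no effect"
        print(f"{contamination:.0%} contaminated: observed {observed:.2f} SD ({verdict})")

Under these assumptions, a genuinely effective program starts to "look like no effect" once even a quarter of the comparison group finds similar support elsewhere.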

Another good example of this phenomenon can be found in the Insights we wrote for the SOURCE program. That program emphasizes working with youth to apply to college the following year and does a lot of work to facilitate that process and get parents on board. The program did seem to do an effective job of getting youth to apply, which was good news. However, the evaluation also found that almost 94% of the comparison group youth applied to college as well, often with the help of other services and programmatic support, either through their school or from other similar nonprofits. The end result is that it looked like the program was no better than just "business as usual."

Now, with a goal like college planning and application in mind, it may be a good thing that such a high percentage of the comparison group did apply—after all this is their time to do it and it would be impossible to tell a family to defer that decision for a year just for an evaluation trying to test the results of one program.  But time and time again we see mentoring program evaluations that are undone by comparison groups of kids getting mentoring from other programmatic sources (sometimes in spite of promising not to). So this is a nice caution to mentoring programs and evaluators to set up your comparison groups carefully. In the case of Eye to Eye, they removed anyone already being mentored from those groups and seemed to avoid comparison youth getting similar support elsewhere, making it easier for them to show clear impacts that are more easily attributable to the work of the program. Of course, to make sure eventual comparisons are truly “apples to apples,” the programs involved need to be similarly restricted to those without pre-existing mentoring relationships, something which may be useful to also consider for other reasons such as prioritizing access to limited program slots or mentors.


For more information on research-informed program practices and tools for implementation, be sure to consult the Elements of Effective Practice for Mentoring™ and the "Resources" section of the National Mentoring Resource Center site.
