Monitoring and Evaluation

*Note: The National Mentoring Resource Center makes these “Insights for Mentoring Practitioners” available for each program or practice reviewed by the National Mentoring Resource Center Research Board. Their purpose is to give mentoring professionals additional information and understanding that can help them apply reviews to their own programs. You can read the full review on the National Mentoring Resource Center website.*


Of all the things that practitioners can do to ensure that they are running high-quality mentoring services and meeting the needs of youth, families, and communities, monitoring and evaluation of mentoring relationships and of program implementation is perhaps the most important. Research in the field of implementation science has clarified just how important monitoring and evaluation are to the refinement and improvement of human service organizations generally. (For a synthesis of research on this topic, see *Implementation Research: A Synthesis of the Literature* by Dean Fixsen and colleagues.) In essence, the aim of monitoring and evaluation is to hold a mentoring program accountable for actually doing what it says it will do. The hope and expectation is that monitoring and evaluation, when put into practice, help the program ensure that participants are getting the services and support promised, while also allowing the program to improve its design, service delivery, and staffing based on relevant feedback and rigorously tracked data.

Unfortunately, monitoring and evaluation, as noted in the profile, can be time-consuming and labor-intensive and, thus, may often be neglected amid the competing demands on programs and their staff. These activities may also place an undue burden on mentors, mentees, and other stakeholders in the match if data gathering is intrusive or distracts from the delivery of other important aspects of the service. In short, while the potential upsides of monitoring and evaluation are easy to see, so are the downsides. Just as with any other area of practice, research offers a window into how the net effects of these processes play out under different scenarios.

Recognize that monitoring and evaluation happen at various levels

Often when programs think about doing monitoring and evaluation they envision complicated and costly studies that require reams of surveys and huge chunks of staff time (or an expensive and intrusive external evaluator). But in reality, these activities can take place at a few different levels of breadth and intensity:

  • Implementation tracking – This task focuses on collecting data about the delivery of program services to mentors, mentees, parents, and other stakeholders. What is being measured here is whether participants are experiencing the program as intended and whether staff are doing what they should to align with the policies and procedures of the program. For this type of monitoring, programs can track the presence, consistency, or quality of a particular task or program practice. For example, in the case of pre-match mentor training, programs can choose to track whether training is offered at all (presence), whether mentors actually show up and finish the training (consistency), and whether mentors found the training helpful and learned anything (quality). This kind of implementation tracking may help identify key aspects of the program that boost its success, as well as weak spots that may be hindering the eventual outcomes experienced by mentors and mentees. And once a program has these systems in place, it can set benchmarks for future success and track its incremental improvement in service delivery over time.

  • Outcome monitoring – This describes systematically tracking the outcomes of youth participating in mentoring, typically using pre-post assessments focused on the outcome areas most closely tied to program goals, but in the absence of a rigorously constructed control or comparison group of youth who did not receive mentoring. Unfortunately, while this type of evaluation activity can provide some information about the extent to which youth in the program are improving in intended areas, it lacks the ability to show that the program is responsible for the change. It can also keep a program from showing that its services are making a positive difference in cases where mentored youth show no improvement, or even decline, on outcomes of interest over time. How could this be? Consider, for example, that this type of pre-post information gathering might show that mentees are experimenting more with drugs and alcohol. But a comparison group of unmentored youth might be experimenting at far higher rates. Without that information, the true impact of the program stays hidden: all that stakeholders would see is that the program’s mentees are engaging in more of these undesired behaviors, much to the program’s detriment. (The short sketch after this list works through this scenario with numbers.)

  • Impact evaluation – This is where it all comes together: comparisons to rigorously constructed groups of non-mentored youth who are initially similar to the mentored youth, coupled with implementation tracking information that can explain why the program did or did not achieve its desired outcomes. Programs may not engage in this activity very often due to financial and staff constraints, but all mentoring programs should eventually attempt an evaluation at this level of rigor if they can. It’s truly the best way not only of accurately showing your impact on the youth you serve, but also of examining the mechanisms that lead to that impact and the areas where the program could become even stronger.
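
To make the difference between outcome monitoring and impact evaluation concrete, here is a minimal Python sketch of the drugs-and-alcohol scenario described above. All of the numbers, variable names, and the simple difference-in-differences arithmetic are illustrative assumptions, not data or methods from any actual program:

```python
# Hypothetical pre/post scores on an undesired behavior (higher = more
# experimentation with drugs and alcohol). All numbers are invented.
mentored = {"pre": 2.0, "post": 2.6}     # mentees got slightly worse...
comparison = {"pre": 2.0, "post": 3.8}   # ...but unmentored youth got far worse

def change(group):
    """Pre-post change, which is all outcome monitoring alone can see."""
    return group["post"] - group["pre"]

# Outcome monitoring alone: a +0.6 increase, which looks like bad news.
print(f"Mentored change: {change(mentored):+.1f}")

# Impact evaluation adds the counterfactual: relative to the comparison
# group's +1.8 increase, mentoring is associated with a 1.2-point smaller
# rise, a protective effect that pre-post data alone would hide.
effect = change(mentored) - change(comparison)
print(f"Estimated program effect: {effect:+.1f}")
```

The pre-post change alone (+0.6) makes the program look harmful, while the comparison against unmentored youth reveals a protective effect. That gap is exactly the missing information described under outcome monitoring above.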

For programs that are interested, the NMRC will be offering a half-day training on the basics of mentoring program evaluation that will help with all three of these levels of monitoring and evaluation, teaching programs what to measure and how to measure it based on their theory of change and service delivery. See the Training and Technical Assistance section of the NMRC website in 2017 for additional details on the availability of this training.

What should programs monitor to gauge implementation?

The other aspect of this that can seem daunting to practitioners is the sheer volume of things that could be monitored. One could, in theory, track every single activity and task, creating a huge volume of data entry and mountains of information that may or may not be useful. But there are several things that mentoring programs may want to consider tracking as they deliver their services:

At the time of participant intake and before the match –

  • Demographics and characteristics of participants – Are the mentors, youth, and families coming through your doors the ones you are targeting in your recruitment?
  • Ratio of mentor inquiries to serious applicants to matches – If you are getting lots of nibbles from prospective mentors but few make it into the program and on to a match, there may be issues with your customer service (the funnel sketch after this list shows one way to track this).
  • Time from acceptance into the program to being matched (for both mentors and youth)
  • Participation in training – Do they show up?
  • Knowledge gained from training – Did they learn anything?
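
As one example of intake-stage tracking, the sketch below computes the inquiry-to-match funnel and the average wait from acceptance to match. The record layout, field names, and dates are hypothetical assumptions for illustration, not a prescribed data model:

```python
from datetime import date

# Hypothetical intake records, one per prospective mentor. "accepted" and
# "matched" hold dates, or None if that step never happened.
mentors = [
    {"inquired": True, "applied": True,  "accepted": date(2017, 1, 9),  "matched": date(2017, 2, 20)},
    {"inquired": True, "applied": True,  "accepted": date(2017, 1, 16), "matched": None},
    {"inquired": True, "applied": False, "accepted": None,              "matched": None},
]

inquiries = sum(m["inquired"] for m in mentors)
applicants = sum(m["applied"] for m in mentors)
matched = [m for m in mentors if m["matched"]]

# Funnel conversion: many inquiries but few matches may signal a
# customer-service or screening bottleneck worth investigating.
print(f"Inquiry -> serious applicant: {applicants / inquiries:.0%}")
print(f"Applicant -> match: {len(matched) / applicants:.0%}")

# Time from acceptance into the program to being matched.
waits = [(m["matched"] - m["accepted"]).days for m in matched]
print(f"Average days from acceptance to match: {sum(waits) / len(waits):.0f}")
```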

After the match is made –

  • Quality of the mentoring relationship – Do participants report that they are in a mutual, rewarding relationship? If not, there is abundant research and practice-based wisdom that suggests it likely will be hard to see the impact you expect.
  • “Dosage” of mentoring – Are matches meeting as often, and over as long a period, as the program requires? (The sketch after this list flags months that fall short.)
  • Consistency and quality of match support – This is critical to any goals related to strong and long matches. If your program is not checking in and being supportive as intended, it may mean trouble for your expected outcomes.
  • Adherence to closure procedures – A shockingly neglected aspect of running a program that, when not attended to, has the potential to negate the impact of even happy, strong matches. How mentoring ends can really matter.
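
A minimal sketch of post-match tracking might flag months where a match falls below the expected meeting “dosage” or where a scheduled staff support contact was missed. The four-meetings-per-month threshold, data layout, and values here are invented for illustration:

```python
# Hypothetical monthly log for one match. The program's expected "dosage"
# (four meetings per month) and all values below are invented examples.
EXPECTED_MEETINGS_PER_MONTH = 4

meetings_held = {"2017-01": 4, "2017-02": 2, "2017-03": 4}
support_contact_made = {"2017-01": True, "2017-02": False, "2017-03": True}

for month, held in meetings_held.items():
    flags = []
    if held < EXPECTED_MEETINGS_PER_MONTH:
        flags.append("below expected dosage")
    if not support_contact_made.get(month, False):
        flags.append("missed staff support contact")
    # Months with flags are where match support staff should follow up.
    print(month, "OK" if not flags else "; ".join(flags))
```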

Tips for doing impact evaluation

The NMRC has provided an excellent list of tips for conducting an impact evaluation as part of our Measurement Guidance Toolkit, so we recommend starting there. But in general, there are several cautions that programs should keep in mind if they really invest in a rigorous impact evaluation:

  • Don’t look for too many outcomes at once – It’s tempting to see if your program is making a difference in unexpected ways, but every additional outcome you test raises the odds of a chance result, and if you swing and miss at too many outcomes it can make you look less successful than you really are (see the sketch after these tips).

  • Don’t look too far out – Remember that you can’t control what happens after youth leave your services, so don’t propose outcomes related to distal achievements like high school graduation or entering the workforce unless your program works directly with young people on those goals. Instead, focus on more short-term changes, like shifts in attitudes or behaviors, which might set youth on the path to achieving those bigger things eventually.

  • Don’t cherry pick your results! – This should go without saying, but it’s unethical to bury any bad news that you find about your program’s delivery or results. Instead, own the reality of the situation and figure out how to continuously improve and grow stronger.

  • Don’t report youth achievements or changes as proof of your work without a counterfactual (i.e., comparison to a rigorously established control or comparison group of youth not involved in the program). As noted previously, mentoring programs are often quite guilty of this. Remember, those gains in grades or test scores might be the result of a hundred things that have nothing to do with your services. So be honest about what you have proof of achieving.

  • Don’t rely on homegrown surveys and scales – This is another common concern when examining outcomes. An untested tool can obscure the true impact of a program on an outcome in a number of ways. The NMRC’s Measurement Guidance Toolkit was designed to alleviate this exact issue.
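
To illustrate the first caution above, about testing too many outcomes at once, here is a small sketch of one common statistical safeguard, the Bonferroni correction, which divides the significance threshold by the number of outcomes tested. The outcome names and p-values are invented for illustration, not drawn from any actual evaluation:

```python
# Each additional outcome tested raises the odds that chance alone produces
# a miss (or a hit). The Bonferroni correction is a simple, conservative
# safeguard: divide the significance threshold by the number of tests.
# All outcome names and p-values below are invented for illustration.
ALPHA = 0.05

p_values = {
    "attitudes toward school": 0.004,
    "self-esteem": 0.030,
    "grades": 0.240,
    "substance use": 0.450,
}

adjusted_alpha = ALPHA / len(p_values)  # 0.05 / 4 = 0.0125
for outcome, p in p_values.items():
    verdict = "significant" if p < adjusted_alpha else "not significant"
    print(f"{outcome}: p = {p:.3f} -> {verdict} at alpha = {adjusted_alpha:.4f}")
```

Under this correction only one of the four outcomes clears the bar, even though two would pass an uncorrected 0.05 threshold. That is one way a long outcome list can make a genuinely effective program look weaker than it is.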

For more information on research-informed program practices and tools for implementation, be sure to consult the Elements of Effective Practice for Mentoring™ and the "Resources for Mentoring Programs" section of the National Mentoring Resource Center site.
