Key Evaluation Considerations

There are several considerations that should be kept in mind when using the instruments recommended in this Toolkit as part of an overall formal evaluation of a mentoring program’s effectiveness:

1. Ensure fidelity to a program’s design, including the expected roles and behaviors of mentors, as well as the quality and duration of the mentoring relationships established, before investing any resources in impact evaluation.

Simply put, make sure that a mentoring program is both being implemented as intended and fostering the desired types of mentoring relationships before trying to determine whether its services are effective in benefiting participating youth. Skipping this step is unfair to mentors, youth, and other stakeholders who deserve an accurate assessment of the ultimate outcomes of their efforts.


2. Always evaluate impact in the context of a relevant and plausible counterfactual (i.e., what youth outcomes would be in the absence of program involvement).

This can be a challenge for many mentoring programs, especially ones that operate in remote communities or within small populations. But there are many options for addressing this concern, and the NMRC can provide technical assistance to help mentoring programs determine how best to identify a comparison or control group of youth.
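To make the idea of a counterfactual concrete, here is a minimal sketch of how an evaluator might compare a targeted outcome for program youth against a comparison group of similar, non-mentored youth. The scores, group sizes, and variable names are invented for illustration only; a real analysis would be tailored to your evaluation design and data.

    # A minimal sketch of comparing program youth to a comparison group.
    # The scores below are made-up illustrations, not real data.
    import numpy as np
    from scipy import stats

    program_scores = np.array([3.8, 4.1, 3.5, 4.4, 3.9, 4.2])     # post-test scores for mentored youth
    comparison_scores = np.array([3.6, 3.7, 3.4, 3.9, 3.5, 3.8])  # similar youth without program involvement

    diff = program_scores.mean() - comparison_scores.mean()

    # Cohen's d: the difference in means scaled by the pooled standard deviation
    pooled_sd = np.sqrt((program_scores.var(ddof=1) + comparison_scores.var(ddof=1)) / 2)
    cohens_d = diff / pooled_sd

    # Welch's t-test: is the difference larger than chance alone would produce?
    t_stat, p_value = stats.ttest_ind(program_scores, comparison_scores, equal_var=False)

    print(f"Mean difference: {diff:.2f}, Cohen's d: {cohens_d:.2f}, p-value: {p_value:.3f}")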


3. Always use a well-developed program logic model to guide both process and impact evaluation activities, including especially the selection of what to measure.

Programs may want to tightly focus their evaluation efforts on the outcomes that they are likely to achieve based on program activities and areas of emphasis. This can save time, energy, and money when it comes to conducting evaluation activities and is likely to increase the chances of finding meaningful positive impacts.


4. Always assess targeted youth outcomes using well-validated measures.

This is, in essence, why this Toolkit was developed. An unproven instrument is not a good choice for “proving” anything about your program’s results.


5. Prioritize assessment of more proximal (i.e., initial and thus relatively immediate) anticipated youth outcomes over those that are more distal (i.e., emerging over relatively extended periods of time and likely to be contingent on attainment of proximal outcomes).

Many mentoring programs are designed to help youth grow in ways that set the stage for eventual “big” outcomes like graduation, entering the workforce, or overcoming a major life challenge. But those outcomes can often be elusive and are subject to a number of forces outside of a program’s control. Start by measuring whether your program is effectively preparing youth in subtler ways for those big goals ahead.


6. Always allow enough time for targeted youth outcomes to realistically be influenced by program participation.

While shorter-term mentoring models have shown some ability to be impactful for youth, looking for big changes early in a mentoring relationship will usually not be realistic. Such premature measurement can make a program appear less effective than it actually is.


7. Always collect youth outcome data from all participants, not only those who have desired amounts of program involvement (e.g., mentoring relationships lasting a full school year).

Part of assessing the value of a program is determining whether it can actually deliver effective and impactful services to all the young people it sets out to serve. A truly accurate picture of what a program has achieved requires that evaluations include outcome data from the youth who quit, left, or otherwise could not meet the ideal level of participation.


8. Always have individuals with formal evaluation training and experience involved in designing and conducting a mentoring program evaluation, as well as in analyzing and reporting its results.

The training and skill set needed to effectively design and lead an evaluation may be available internally within some mentoring programs. For most, however, there will likely be a need for external assistance. Indeed, the higher the level of “evidence” desired (e.g., persuasive evidence of impact on youth outcomes), the more complicated and technically demanding the required evaluation activities (e.g., data collection and analysis) are likely to be. So make sure your program has access to the skills it needs to provide stakeholders with accurate and compelling evaluation results.


9. Test for differences in program implementation fidelity, mentoring relationship duration/quality, and effects on youth outcomes across different groups of participants and program settings.

There are many, many factors that go into whether a program is able to be implemented as planned, establish and sustain high-quality mentoring relationships, and achieve meaningful outcomes. These include the backgrounds and characteristics of the youth served, the skill and experience levels of mentors and staff, and features of the program’s design that may vary over time or across settings. Examining as many of these potential contributing factors as possible within an evaluation will help to paint a more accurate and nuanced picture of why a program did (or did not) achieve its goals.
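As one illustration of how such contributing factors can be examined, here is a minimal sketch that tests whether an apparent program effect differs across a youth characteristic (in this case, age). The data frame and column names are hypothetical placeholders; an actual evaluation would use the factors identified in your logic model and the data you have collected.

    # A minimal sketch of testing whether program effects vary with a youth
    # characteristic. The data and column names are hypothetical placeholders.
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.DataFrame({
        "outcome": [3.2, 3.9, 4.1, 3.5, 3.0, 4.4, 3.8, 3.1, 3.6, 4.0],
        "program": [0, 1, 1, 1, 0, 1, 1, 0, 0, 1],   # 1 = mentored youth, 0 = comparison youth
        "age":     [11, 12, 14, 13, 15, 11, 15, 12, 14, 13],
    })

    # The program:age interaction asks whether the apparent program effect is
    # larger or smaller for older versus younger youth.
    model = smf.ols("outcome ~ program * age", data=df).fit()
    print(model.summary())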


10. Always evaluate potential harmful effects of program participation on youth (e.g., adverse experiences with mentors).

It can be hard to think of mentoring as a negative experience. Yet, the reality is that sometimes mentoring relationships (like all relationships) can be challenging or even harmful experiences. Make sure your program fulfills its ethical obligations, in part, by at least “doing no harm” to the youth served.


11. Recognize the limitations of what it is possible to do in an evaluation.

While evaluations of all types can yield valuable and actionable information for programs, they must be done well whatever their purpose. Consider, for example, impact evaluations in which the aim is to assess the effects of program participation on the outcomes of youth served against a comparison group. Such evaluations are invariably challenging to implement and can be expected to require considerable investments of time not only from program staff, but also from persons with advanced training in program evaluation and statistical analysis. Likewise, smaller programs may face additional challenges in generating enough statistical “power” to draw firm conclusions about their effects due to their small numbers of participants. Programs (big or small) can also face challenges around creating a relevant control or comparison group. This may especially be the case for so-called randomized control designs, given that these may require denying, or at least delaying, mentoring opportunities to some youth, which has moral, ethical, and sometimes funding implications. With such considerations in mind, programs should take great care in choosing evaluation aims and approaches that best fit their size and resources.
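As a rough illustration of the statistical “power” issue noted above, here is a minimal sketch that estimates how many youth per group would be needed to detect a modest program effect, and how much power a smaller program would have to detect that same effect. The effect size and thresholds shown are common illustrative conventions, not benchmarks for any particular program.

    # A minimal sketch of a statistical power check for a two-group comparison
    # (program youth vs. comparison youth). Values are illustrative assumptions.
    from statsmodels.stats.power import TTestIndPower

    analysis = TTestIndPower()

    # Youth needed per group to detect a small-to-moderate effect (d = 0.3)
    # at conventional thresholds (alpha = .05, 80% power).
    n_per_group = analysis.solve_power(effect_size=0.3, alpha=0.05, power=0.8)
    print(f"Youth needed per group: {n_per_group:.0f}")

    # Conversely, the power a smaller program (40 youth per group) would have
    # to detect that same effect.
    power = analysis.solve_power(effect_size=0.3, nobs1=40, alpha=0.05)
    print(f"Power with 40 youth per group: {power:.2f}")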


12. Report evaluation findings accurately and honestly.

It’s an understandable temptation for mentoring programs to present the results of evaluations of their services in the most favorable light possible or to underreport findings that seem to question their efficacy. But even poor or mediocre results are a positive in that they provide the information and impetus needed to improve a program’s services. And, even more importantly, stakeholders deserve an honest accounting of the program’s successes and failures. Their investments of time, energy, and other resources (e.g., funding) should be honored with accurate reporting and the use of evaluation findings as a foundation for program improvement.


Tips for Working with an External Evaluator

One of the biggest decisions around program evaluation is whether to work with an external evaluator or to try to collect, interpret, and report on program data “in-house” using existing staff (i.e., an “internal” evaluation). A good starting point for making this decision is to look at the tasks associated with doing a quality evaluation and see how many of them your staff feel they can reasonably handle themselves given their existing job duties and skill levels:

Tasks often required for both internal and external evaluations:

  • Train staff on data collection procedures;
  • Set up systems to collect, store, and organize key information (see the sketch after these task lists);
  • Train staff on the use of these systems;
  • Ensure regular, consistent staff use of these systems (e.g., inputting data);
  • Have staff administer assessments (e.g., surveys) to participants and/or encourage their completion.

Additional tasks required for internal evaluations:

  • Design the evaluation;
  • For strong outcome evaluations, identify and collect data for a comparison group;
  • Identify and, as needed, develop tools to assess outcomes of interest;
  • Track administration of these tools;
  • Compile data from systems and tools used to collect information;
  • Statistically analyze the data to answer questions of interest;
  • Summarize the findings;
  • Develop a communications strategy for dissemination.
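As referenced in the task lists above, here is a minimal sketch of one way a program might set up a simple system to store key information and track who has been assessed and when. The tables, columns, and survey names are hypothetical placeholders rather than a recommended schema; many programs will use spreadsheets or case-management software instead.

    # A minimal sketch of a simple data system for an evaluation, using a
    # SQLite database. Table and column names are hypothetical placeholders.
    import sqlite3

    conn = sqlite3.connect("mentoring_evaluation.db")
    conn.executescript("""
    CREATE TABLE IF NOT EXISTS participants (
        participant_id INTEGER PRIMARY KEY,
        enrolled_on    TEXT NOT NULL,   -- ISO date of enrollment
        group_type     TEXT NOT NULL    -- 'program' or 'comparison'
    );
    CREATE TABLE IF NOT EXISTS survey_administrations (
        participant_id  INTEGER REFERENCES participants(participant_id),
        survey_name     TEXT NOT NULL,  -- e.g., 'baseline', 'follow-up'
        administered_on TEXT NOT NULL   -- ISO date the survey was completed
    );
    """)

    # Which participants have a baseline survey recorded but no follow-up yet?
    pending = conn.execute("""
        SELECT p.participant_id
        FROM participants p
        WHERE EXISTS (SELECT 1 FROM survey_administrations s
                      WHERE s.participant_id = p.participant_id
                        AND s.survey_name = 'baseline')
          AND NOT EXISTS (SELECT 1 FROM survey_administrations s
                          WHERE s.participant_id = p.participant_id
                            AND s.survey_name = 'follow-up')
    """).fetchall()
    print(f"{len(pending)} participants still need a follow-up survey")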

If you do decide to work with an external evaluator, picking the right one can be surprisingly complicated. The following tips can help you find an external evaluator that’s right for your needs:

  • Start by studying other evaluations conducted by each prospective evaluator (look at tone, focus, and accessibility, particularly to the audiences you are trying to reach). You’ll also want to ask several questions of potential evaluators:
    • How much input will you have on the measurement tools that are used?
    • How long will the study take and what will be required of your participants?
    • Will the study they have in mind answer the questions you want/need answered?
    • When will findings be shared with you?
    • How much staff time will you need to contribute and when?
    • Will results be disseminated more broadly or can they be solely for your internal use? (Most evaluators, especially if they are low or no cost, will only work on the study if they can disseminate the findings more broadly.)
    • Who will be working with you on the project (who is on the study team)? And how much time will each person on the team devote to the project?
    • What kind of input will the partner allow in any reports that are disseminated on the study? (To remain “external”, most evaluators will have the final say on any formal evaluation reports that are prepared and disseminated.)
    • What will the final report include? Who will it be shared with?

  • Once you set up a relationship with an evaluator, make sure to create an MOU (memorandum of understanding) that outlines:
    • Specific roles for both your program and the evaluator, including how much time (and in what roles) your staff will need to devote to the study;
    • How much time will be required from your program participants;
    • Cost (Is it a “fixed” cost budget where you pay a set amount no matter what, or a budget that may increase or decrease depending on what’s needed to complete the study?);
    • What kind of products will result from the study (reports, briefs, etc.);
    • How much input you will have in any report that gets written (e.g., how much time you will be given to review any products, what kind of input you will be able to have);
    • What questions will be answered in the evaluation;
    • Timeline for the project;
    • Who will own the data; and
    • Who will be able to use the data going forward.

Budgeting for an External Evaluation

Costs for an external evaluation vary widely depending on whom you partner with and the questions you want answered. Process evaluations can be less expensive because much of the required data can come directly from your program, but they may require more staff time to set up systems that allow an evaluator to assess your program’s activities. Outcome evaluations may not require as many systems to be in place, but they will require more of your participants’ time and, in some cases, staff time to collect some or all of the data. They are also very time-intensive for evaluators, so they are typically more expensive.

You’ll need to budget staff time for:

  • Regular meetings with the evaluator;
  • Training staff in study consent and data collection procedures;
  • Setting up systems (spreadsheets, databases) for data collection;
  • Training staff in the use of these systems;
  • Regular staff use of these systems (e.g., inputting data regularly);
  • Reviewing tools selected or developed;
  • Administration of these tools (e.g., administering surveys);
  • Tracking administration of tools (i.e., who has been assessed and when);
  • Reviewing reports and other communications about the project;
  • (in some cases) Administering surveys to participants, including potentially those who are no longer attending your program. 
