Monitoring and Evaluation

  • Evidence Rating for this Practice:

    Insufficient Research (1 Test of the Practice in 1 Study)

    In a test of the practice in one study (a meta-analysis), the practice of monitoring implementation in mentoring programs was associated with better outcomes. In this study, the outcome evidence satisfied criteria for a designation of Promising; however, the methodology used for assessing effects of the practice did not meet relevant criteria for rigor, and the study was therefore designated as Insufficient Evidence. Because there are no additional tests of the practice, the practice as a whole is designated as Insufficient Research. This rating is based on currently available research and may change as new research becomes available.

    Description of Practice:

    The practice of monitoring and evaluation involves two components. Monitoring is the routine collection of information pertaining to individual mentoring relationships within a program, often with a focus on determining compliance with programmatic expectations or standards (e.g., frequency of mentor-mentee contact, topics addressed in support contacts from staff). Monitoring is included as one of the Standards in the 4th edition of the Elements of Effective Practice for Mentoring™ (Monitoring and Support). As described in this Standard, monitoring includes obtaining ongoing information about the activities engaged in by the mentor and mentee, the quality of the mentoring relationship, and outcomes identified as important for the mentee through regularly scheduled contacts with participants (mentor, mentee, and parent/guardian). Monitoring also may include gathering information about aspects of services provided to a mentoring relationship, such as whether staff support of the relationship is adhering to program guidelines (e.g., frequency, topics addressed). Monitoring practices may include procedures for making use of the information gathered for a variety of purposes, including tailoring of support provided to individual mentoring relationships and supervision of program staff.

    Evaluation involves more systematic collection and analysis of information with the aim of assessing one or more of the following: (1) need for a program (or practice within a program), (2) program design and logic/theory, (3) implementation of a program (or practice) and how it is experienced by participants, (4) impact of a program (or one or more practices within the program) on participant outcomes, and (5) program (or practice) cost and efficiency (adapted from Rossi, Lipsey, & Freeman, 2004). Monitoring and evaluation activities are interrelated, in part because monitoring data, when examined at the program level, can be used to inform evaluation of the program, particularly with regard to its implementation.

    This practice is distinct from other mentoring program practices in its focus on standardized processes of information gathering and analysis that can address potentially any or all aspects of a program as implemented, rather than the design or intended procedures of particular areas of a program and its operation. Monitoring can, however, support and be incorporated into many practices (e.g., family engagement, match support), thereby helping to ensure that relevant information is tracked and available for use in supporting the effective implementation of the practices involved. Evaluation methods similarly can be used to assess the implementation and impact of practices incorporated into mentoring programs.

    Goals:

    The primary goal of the practice of monitoring is to promote the success of mentoring relationships and positive resulting outcomes for mentees by having programs routinely collect and use information to support matches and ensure implementation of planned programmatic elements. The goals of evaluation vary by type of evaluation. In general, process or implementation evaluations are aimed at informing the development of appropriate programming and practices. Outcome or impact evaluations typically focus on informing understanding of the effectiveness of a program (or practice) for improving youth outcomes. In some cases, the goal may be to learn more about the effectiveness of a practice or set of practices in relation to supporting one or more facets of a program’s functioning, such as volunteer recruitment or efficiency in matching youth and mentors.

    Targeted Forms of Mentoring and Youth Populations:

    This practice is potentially applicable to all forms of mentoring and the full range of youth who may be served by programs. It is possible, however, that some forms of monitoring may be more relevant or a better fit within certain types of programs. For instance, monitoring procedures should be established in a way that fits within the current scale and resources of the program. For highly vulnerable populations of youth (e.g., youth who are homeless), it may be less appropriate to collect some forms of monitoring data if it could potentially put youth at risk (e.g., disclosure of involvement in minor criminal behavior or contact with an abusive family of origin).

    Program evaluation is also potentially applicable to all forms of mentoring and the full range of youth who may be served by mentoring programs. It may be important, however, to match the type of evaluation to the state of development of a program (e.g., a program early in development or currently piloting different services may be better served by an implementation evaluation rather than an impact evaluation, which could underestimate program benefits if various elements are still in development or modification).

    Theory:

    The value of monitoring focused on gathering information about mentoring relationships is suggested by theory and research in which greater mentoring relationship quality (e.g., feelings of closeness and trust) and longevity have been linked to more positive outcomes for youth (e.g., Bayer, Grossman, & DuBois, 2015; Grossman & Rhodes, 2002; Rhodes, 2005). It may be important, however, to balance the potential benefits of monitoring with the demands that it may place on staff and mentors. This may be especially important in the case of volunteer mentors, as the burdens associated with different forms of monitoring (e.g., structured reporting on activities engaged in during each mentee outing) could conceivably detract from program enjoyment and satisfaction and thus prove counterproductive (e.g., increase the risk of early mentor attrition).

    With regard to evaluation, a review of research on prevention and health promotion programs for youth more generally found that better implementation, particularly with regard to fidelity to the program model and program dosage for individual participants, has been associated with better outcomes for participants (Durlak & DuPre, 2008). Program evaluation has the capacity to improve overall program effectiveness through a number of avenues (Metz, 2007). Process evaluation findings, for example, may help identify areas of a mentoring program (e.g., mentor training, match support contacts) that are not functioning as well as desired or not being experienced positively by participants, as well as promising directions for addressing such concerns through improvements in the program’s design or implementation. Impact evaluations may likewise offer useful insights to guide strengthening of a program, such as the degree to which particular types of youth outcomes are being improved or the extent to which benefits are consistent across key subgroups of youth (e.g., boys and girls).

    Corresponding Elements of Effective Practice:

    This practice is most relevant to the area of Monitoring and Support within the Elements of Effective Practice.

    Key Personnel:

    The successful implementation of program monitoring may require a range of staff (from those in direct case management roles to staff in supervisory or management positions) to have a strong foundational understanding of the benefits and use of such monitoring, as well as training and support in the monitoring practices relevant to a particular program or agency. Program evaluation, on the other hand, is likely to require significant involvement from a smaller subset of program staff, who would require this kind of foundational understanding regardless of whether evaluation is conducted internally or under the auspices of an outside evaluator. Frontline staff nonetheless may still require training and ongoing support to ensure that any information they are asked to provide in support of evaluation is accurate and reliable.

    Additional Information:

    None.

  • Evaluation Methodology:

    DuBois and colleagues (2002) examined the practice of monitoring of implementation in a meta-analysis of 55 youth mentoring program evaluations. (Meta-analysis is a technique for synthesizing and summarizing findings across evaluations of similar, but not identical, research studies. One question often addressed in meta-analyses is whether the effects of a certain kind of program, like youth mentoring, differ based on the specific types of practices that are utilized. A correlation between the use of a practice and program effectiveness does not, generally speaking, provide strong or definitive evidence of a causal effect of that practice; programs that do or do not utilize a particular practice may differ in other important ways, not all of which can be controlled for statistically.) Analyses were based on 59 independent samples because some studies contributed more than one sample. To be included, the evaluations needed to utilize a two-group randomized controlled or quasi-experimental design (15 and 26 samples, respectively) or a one-group pre-post design (18 samples). The meta-analysis included a comparison of effect sizes on youth outcomes between programs that included monitoring of implementation (15 samples) and those that did not (44 samples). Prior to this analysis, effect sizes were residualized on study sample size and evaluation design to control for these methodological influences. Further, multivariate analyses examined whether monitoring of implementation earned entry into a best-fitting regression model for predictors of effect size; one regression considered 11 features of programs suggested to be important on the basis of theory, and a second focused on 7 program characteristics that reached or approached statistical significance as moderators of effect size in the meta-analysis.
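
    To make the residualization step concrete, the sketch below (hypothetical data and variable names throughout; not the authors' code) regresses study effect sizes on sample size and dummy-coded evaluation design, then compares the residuals by monitoring status.

    ```python
    # Minimal sketch of residualizing effect sizes on methodological
    # covariates before comparing monitored vs. non-monitored programs.
    # All data and variable names here are hypothetical.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    k = 59                                     # independent samples, as in the study
    effect_size = rng.normal(0.15, 0.10, k)    # hypothetical study effect sizes
    log_n = np.log(rng.integers(30, 500, k))   # study sample size (logged)
    design = rng.integers(0, 3, k)             # 0=pre-post, 1=quasi-exp., 2=randomized
    monitored = rng.integers(0, 2, k)          # 1 = program monitored implementation

    # Regress effect sizes on sample size and design (dummy-coded),
    # keeping the residuals as "methodology-adjusted" effect sizes.
    dummies = np.column_stack([design == 1, design == 2]).astype(float)
    X = sm.add_constant(np.column_stack([log_n, dummies]))
    resid = sm.OLS(effect_size, X).fit().resid

    # Compare mean adjusted effect sizes by monitoring status.
    print(resid[monitored == 1].mean() - resid[monitored == 0].mean())
    ```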

    All analyses were conducted under the assumptions of both fixed and random effects models. Effect sizes corresponded to differences on youth outcome measures at post-test or follow-up between program and comparison/control group youth (or, in the case of evaluations with single-group designs, differences between pre-test and post-test or follow-up scores for program youth). The specific youth outcomes assessed varied by evaluation and could fall within any of the following domains: emotional/psychological, problem/high-risk behavior, social competence, academic/educational, and career/employment.
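
    As a simple illustration of the two modeling assumptions, the following sketch (hypothetical effect sizes and variances; not data from the meta-analysis) computes a fixed-effect inverse-variance pooled estimate and a random-effects estimate using the standard DerSimonian-Laird estimator.

    ```python
    # Minimal sketch of fixed- vs. random-effects pooling of study effect
    # sizes; the d and v values below are hypothetical.
    import numpy as np

    d = np.array([0.25, 0.10, 0.30, 0.05, 0.20])  # hypothetical effect sizes
    v = np.array([0.02, 0.05, 0.03, 0.04, 0.01])  # their sampling variances

    # Fixed effect: weight each study by the inverse of its sampling variance.
    w = 1.0 / v
    d_fixed = np.sum(w * d) / np.sum(w)

    # Random effects (DerSimonian-Laird): add between-study variance tau^2.
    Q = np.sum(w * (d - d_fixed) ** 2)
    C = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (Q - (len(d) - 1)) / C)
    w_re = 1.0 / (v + tau2)
    d_random = np.sum(w_re * d) / np.sum(w_re)

    print(f"fixed = {d_fixed:.3f}, random = {d_random:.3f}")
    ```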

    Evaluation Outcomes:

    Youth Outcomes
    Programs that engaged in monitoring of implementation had larger estimated effects on youth outcomes than those that did not engage in this practice. This difference was statistically significant in both fixed and random effects analyses. In the latter, random effects analysis, programs utilizing monitoring of implementation had a larger estimated favorable effect on youth outcomes (.19; 95% confidence interval: .12 to .26) than those that did not (.06; 95% confidence interval: -.08 to .20). In practical terms, the effect size found for programs utilizing monitoring of implementation corresponds to the average youth in a mentoring program scoring approximately 8 percentile points higher than the average youth in the non-mentored comparison group; this difference is only 2 percentile points in the case of programs that did not include monitoring of implementation (Cooper, 2010).
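
    The percentile translation cited to Cooper (2010) follows from the normal distribution: a standardized mean difference d places the average program youth at the Φ(d) percentile of the comparison group's outcome distribution. The short check below (not code from the report) reproduces the figures above.

    ```python
    # Convert standardized mean differences to percentile-point gains,
    # assuming normally distributed outcomes.
    from scipy.stats import norm

    for d in (0.19, 0.06):
        gain = (norm.cdf(d) - 0.5) * 100  # percentile points above the 50th
        print(f"d = {d:.2f} -> about {gain:.0f} percentile points")
    # d = 0.19 -> about 8 percentile points; d = 0.06 -> about 2
    ```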

    Additional Findings
    In multivariate analyses that considered numerous program practices together, monitoring of implementation did not earn entry into best-fitting regressions for predicting program effect sizes under either the assumption of random effects or of fixed effects.
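
    The "earned entry" criterion refers to stepwise model building. The sketch below (a generic forward-selection routine on hypothetical data; not the authors' procedure) shows the basic idea: a program practice enters the regression only if it significantly improves prediction of study effect sizes.

    ```python
    # Minimal forward-selection sketch: predictors of effect size "earn
    # entry" only while they add significant explanatory power.
    # All data here are hypothetical.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    k, p = 59, 11
    features = rng.integers(0, 2, (k, p)).astype(float)  # 11 hypothetical practices
    effect_size = 0.10 + 0.08 * features[:, 0] + rng.normal(0.0, 0.10, k)

    selected, remaining = [], list(range(p))
    while remaining:
        # Fit one candidate model per remaining predictor; track its p-value.
        pvals = {}
        for j in remaining:
            X = sm.add_constant(features[:, selected + [j]])
            pvals[j] = sm.OLS(effect_size, X).fit().pvalues[-1]
        best = min(pvals, key=pvals.get)
        if pvals[best] >= 0.05:  # no remaining predictor earns entry
            break
        selected.append(best)
        remaining.remove(best)

    print("predictors earning entry:", selected)
    ```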

  • External Validity Evidence:

    Variations in the Practice
    Information is lacking on the types of monitoring of program implementation used by the programs that served as the focus of the meta-analytic study reviewed. There was likely variation across programs in the kinds of information collected for monitoring purposes, as well as in the ways in which monitoring was conducted within the program. However, the study did not test for differences in the effects of monitoring of implementation as a function of these variations. In addition, no studies to date have examined the potential effects of program evaluation on youth or other outcomes in mentoring programs.

    Youth
    Youth served by the programs that were the focus of the meta-analytic study reviewed were from a variety of backgrounds and had varying levels of environmental and individual risks. All programs targeted youth who were elementary, middle, or high school-aged (under 19 years). The meta-analysis, however, did not test for differences in effect of program monitoring of implementation in relation to these types of youth characteristics, thus making the applicability of findings to different subgroups of youth unknown.

    Mentors
    Mentors represented in the meta-analytic study reviewed varied in terms of age, gender, race and ethnicity, professional background, and whether or not they received payment for their role. This study did not test for differences in effect of monitoring of program implementation in relation to mentor characteristics, thus making the applicability of findings to different subgroups of mentors unknown.

    Program Settings/Structures
    The mentoring programs represented in the meta-analysis were one-to-one in format and predominantly community- or school-based (although a smaller number were situated in a workplace or other setting). Understanding the effects of program monitoring and evaluation across the broader spectrum of potential program structures and settings (for example, group mentoring and community site-based programs) is therefore limited.

    Outcomes
    The meta-analytic study included youth outcomes in a variety of different domains (e.g., problem behavior, academic, emotional/psychological); however, there were no tests for possible differential effects of monitoring of implementation depending on the domain or type of youth outcome.

  • Resources Available to Support Implementation:

    Resources to support implementation of program monitoring and evaluation can be found under the Resources section of this website. These resources include: Generic Mentoring Program Policy and Procedure Manual, Going the Distance: A Guide to Building Lasting Relationships in Mentoring Programs, Imua, and The ABCs of School-Based Mentoring. The Learning Hub section of this website includes the Measurement Guidance Toolkit for Mentoring Programs, which provides recommended instruments for measuring youth outcomes as part of program evaluation as well as risk and protective factors that may be relevant to these outcomes.

  • Evidence Base:

    DuBois, D. L., Holloway, B. E., Valentine, J. C., & Cooper, H. (2002). Effectiveness of mentoring programs for youth: A meta-analytic review. American Journal of Community Psychology, 30, 157-197.

    Additional References:

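    Cooper, H. (2010). Research synthesis and meta-analysis: A step-by-step approach (4th ed.). Thousand Oaks, CA: Sage.
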
    Durlak, J. A., & DuPre, E. P. (2008). Implementation matters: A review of research on the influence of implementation on program outcomes and the factors affecting implementation. American Journal of Community Psychology, 41, 327–350. doi:10.1007/s10464-008-9165-0

    Grossman, J. B., & Rhodes, J. E. (2002). The test of time: Predictors and effects of duration in youth mentoring programs. American Journal of Community Psychology, 30, 199-219. doi:10.1023/A:1014680827552

    MENTOR: The National Mentoring Partnership. (2015). Elements of Effective Practice for Mentoring (4th ed.). Boston, MA: Author.

    Metz, A. J. R. (2007). Why conduct a program evaluation? Five reasons why evaluation can help an out-of-school time program. Child Trends Research-to-Results Brief #2007-31. Retrieved from http://bit.ly/2g2WKR6

    Rhodes, J. E. (2005). A model of youth mentoring. In D. L. DuBois & M. J. Karcher (Eds.), Handbook of youth mentoring (pp. 30-43). Thousand Oaks, CA: Sage.

    Rossi, P. H., Lipsey, M. W., & Freeman, H. E. (2004). Evaluation: A systematic approach (7th ed.). Thousand Oaks, CA: Sage.
