National Mentoring Resource Center Blog


Improving Mentoring by Improving Mentor Training

AUGUST 22, 2018
BY: SAM MCQUILLIN, ASSISTANT PROFESSOR OF PSYCHOLOGY, UNIVERSITY OF SOUTH CAROLINA AND NMRC RESEARCH BOARD MEMBER

In 1964, the famed American psychologist B.F. Skinner predicted how education might change by 1984. Reflecting on the precedent for this prediction, he wrote, “Improving education seldom takes the form of improving teaching.” I was confused the first time I read this quote. What he meant, as I understand it, was that efforts to improve education as an institution rarely come from efforts to improve the conditions in which people learn (i.e., teaching). He argued that many educational improvements come from finding better teachers, enhancing the aesthetics of curricula, teaching more of what is necessary and less of what is not, and expanding education through mass media. He thought well of these efforts, but was perturbed that very few educational improvements focused on improving, or even using, the science of how people learn and, maybe more importantly, forget. He went on to argue that science has something to say about how people learn, and it’s a shame that we don’t use that science to improve what people know and do. By 1984, if we were to heed his advice, we would be able to teach students more with greater efficiency.

I was reminded of this quote several years ago when I was revising the mentor training protocol for my school-based mentoring program, the Academic Mentoring Program for Education and Development (AMPED). We had just obtained pretty disappointing results in an evaluation of our program, and I was tasked with improving what our mentors know and do when they are with their mentees. I noticed that most of my improvement efforts focused on changing exactly what worried Skinner: I was trying to improve the training by changing the look and aesthetic of the training manual, removing some content and adding other material, and automating some of our training through web-based media. I reflected on the fact that I was paying very little attention to some basic scientific facts about how people learn and forget.

For example, in one of the first psychological experiments ever conducted, Hermann Ebbinghaus observed that people learn better from distributed teaching (teaching spread out over time) than from massed teaching (teaching all at once). He found that breaking training events up into shorter segments and spreading them out over a longer period of time was much more effective than fewer, longer training sessions. He also found that people forget what they learn at an exponential rate if they don’t have an opportunity to practice what they learned (a phenomenon called the Forgetting Curve). This was a remarkably robust finding for the time; the phenomenon exists in both human and non-human animals, and it is one of the few psychological facts that survived psychology’s early research programs. Unfortunately for the mentors and mentees in our program, the lion’s share of our existing mentor training was conducted through workshop-style up-front training “binges.” We would corral mentors in a room and teach them until they were exhausted, and then we’d hope and pray that they would remember it all in the months after they first met and began mentoring their mentees. Not surprisingly, in hindsight, we found that they didn’t do the things we taught them to do, and they did many of the things we taught them not to do.
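To make the Forgetting Curve concrete, here is a minimal sketch of exponential forgetting in Python. The decay form and the "stability" parameter are illustrative assumptions, not values from Ebbinghaus's data; the point is only that retention falls off quickly after a single session, while a recent review resets the clock.

```python
import math

def retention(days_since_learning, stability=5.0):
    """Ebbinghaus-style exponential forgetting: the fraction of material
    retained decays exponentially with time since it was last practiced.
    The stability value (in days) is an illustrative assumption."""
    return math.exp(-days_since_learning / stability)

# One massed "binge" training, checked 30 days later:
massed = retention(30)
# The same material, but the most recent spaced review was 2 days ago:
spaced = retention(2)

print(round(massed, 3), round(spaced, 3))  # → 0.002 0.67
```

Almost nothing survives a month without practice, while material reviewed a couple of days ago is still largely retained, which is why distributed trainings timed near the moment of use tend to stick.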

We later revised our training to emphasize briefer trainings spread out over the course of the mentoring relationship. This ongoing training would rehash things learned in the up-front trainings using examples relevant to each mentor’s match, and other trainings would introduce concepts that might only be relevant for one or two meetings. In subsequent evaluations of our program, we noticed remarkable improvements in mentors’ appreciation of the program and training, and in the effects of the program on students’ outcomes. Mentors felt better about the program, appraised the training as more valuable, and planned to continue mentoring in the program longer than they did in the previous iteration of the program. Encouraged by these results, we further refined our trainings by providing opportunities to practice and receive feedback on key skills throughout the mentoring relationship, another strategy derived from basic facts in the learning sciences. We’ve seen similar success with these revisions.

More recently, I’ve been analyzing data from the MENTOR National Mentoring Program Survey that Mike Garringer, Heather McDaniel, and I conducted back in 2016. In these analyses, I am using machine learning to try to predict why some programs have stronger matches than others. In the survey, we were able to engage almost 1,500 programs in reporting a number of data points about their programs, including information on who their mentees were, how much it costs to operate their programs, the type and form of training they provide, and so on. We also asked them to report on one indicator of potential program effectiveness: the percentage of matches that meet the expected match duration. This is an important outcome in part because researchers suggest that premature termination of relationships increases the risk that mentoring will be harmful and decreases the likelihood that children will benefit from mentoring. In my statistical model, I allowed nearly 30 covariates to compete against each other in predicting premature match closure. Using this model, I was able to identify the single strongest predictor of successful matches: ongoing training and support for mentors. This predictor outperformed others, including whether or not program youth were criminally involved, the amount of money the program spends on each youth, the racial and demographic makeup of youth, and the type of program model. One of the least important (and, in this model, statistically non-significant) predictors was the duration of up-front, pre-match training. This finding maps on pretty well to my own personal experience, as well as the experiments I’ve conducted, and has an uncanny connection to Ebbinghaus’s insights.
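For readers curious what this kind of "covariates competing against each other" analysis looks like, here is a rough sketch using a random forest, one common machine-learning approach to ranking predictors. Everything here is a stand-in: the variable names, the synthetic data, and the model choice are my illustrative assumptions, not the actual survey covariates or the analysis described above.

```python
# A sketch of feature-importance ranking on synthetic data.
# Variable names and effect sizes are illustrative assumptions only.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 1500  # roughly the number of programs in the survey

# Hypothetical program-level covariates, each scaled 0-1:
ongoing_training = rng.random(n)  # intensity of ongoing training/support
prematch_hours = rng.random(n)    # up-front, pre-match training duration
cost_per_youth = rng.random(n)    # spending per youth served

# Simulated outcome: share of matches meeting expected duration,
# constructed (by assumption) so ongoing training matters most.
outcome = 0.6 * ongoing_training + 0.1 * cost_per_youth + 0.05 * rng.random(n)

X = np.column_stack([ongoing_training, prematch_hours, cost_per_youth])
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, outcome)

# The model's importance scores rank the covariates against each other:
for name, imp in zip(["ongoing_training", "prematch_hours", "cost_per_youth"],
                     model.feature_importances_):
    print(f"{name}: {imp:.2f}")
```

With data built this way, ongoing training receives the largest importance score and pre-match hours (which have no true effect in the simulation) receive the smallest, mirroring the pattern the survey analysis found.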

Although most programs do not have as specific a curriculum as the program I operate, I would be hard pressed to find a program that doesn’t have at least some things it wants mentors to do, and other things it would like them not to do. And I would guess that at least some of these mentor behaviors are not learned in a single training. If these assumptions are not too farfetched, I would advise most programs to look at ways of spreading their training out over the course of the relationship and to align the timing of training experiences with when mentors are expected to perform the desired skills or behaviors.

Although Skinner was wrong about almost all of his predictions for 1984—for example, he thought we wouldn’t have illustrations in our books anymore—in the decades since, scientists have discovered new insights into how people learn and remember, and the field of mentoring would be wise to take heed of some of these findings to improve how we support mentors and mentees throughout the course of their relationships.

Comments (2)

Thanks for sharing. This is a great reminder of the importance of ongoing training. Our program started using "Flash Trainings" (30 minute trainings from 5:15 - 5:45) that address just one issue that mentors have questions about.

Good stuff, Dr. McQ!!
