The goal of retaining students—and seeing them through to degree completion—has become the focus of numerous research initiatives, articles, technology solutions and intervention strategies. While the studies reach very different conclusions, most researchers agree that no single factor holds the key to student persistence. How, then, can higher education leaders accurately predict attrition risk using the same factors and weightings across all student cohorts and across a wide range of institution types? In this web seminar, presenters discussed the factors that should be considered when predicting risk for different types of institutions—and for distinct student subgroups within each institution—and how continual monitoring and annual adjustments to the algorithms can have a positive impact on student retention rates.
Director of Analytics and Research
Retention is a complex problem because we’re talking about many different students and many different types of students. They come to us with a variety of backgrounds, levels of preparation, goals and motivations. There are many factors that affect why a student stays, and why a student leaves an institution.
There are five common strategies to address retention issues. These strategies are not necessarily chronological stages and are not necessarily mutually exclusive. They can be used in combination or they can be employed in a different order.
Strategy #1 is to find a simple solution, a silver-bullet approach. We want something that has some newness and some flash; we want something that’s narrow, manageable, supplementary and not disruptive. Maybe we think about something like a session at orientation, or a new position, or a new course.
The problem with using this strategy alone is that these simple strategies often aren’t enough. They often don’t create significant or long-term changes either because they are narrow or because it is difficult to maintain momentum. These approaches can also be difficult to scale, or they get displaced by the next new idea.
Strategy #2 is to find more data. This approach is different from Strategy #1 because instead of starting with a solution, we seek to understand the issue. If we know more, then we can better target our interventions.
This approach has intuitive appeal because we have lots of data on our campuses that may be important—things like admissions and student records. These records hold key information: transfer credits, placement test scores and high school records. Any of that information can speak to a student’s preparedness and help us understand how they will do on our campus. Enrollment records also tell us something about persistence, academic paths, and whether students are on target to graduate.
Surveys are another important source of information because records can miss issues such as motivation, goals, fit, and behaviors. The most efficient method to know if a student is thinking about leaving an institution is to ask them. So surveys are important and should be combined with other data sources.
But, more data can quickly become overwhelming, particularly if I’m the person trying to look at hundreds of students. All those data sources become additional points that need to be evaluated and considered. Because every item does not affect risk equally, making sense of so many items quickly becomes a complex task.
Strategy #3 is to find an expert model, which makes sense of multiple data sources. This approach includes the benefits of both theory and analysis. The model can sort through patterns and help us prioritize which students need support.
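The idea of an expert model that weighs multiple data sources and prioritizes students can be sketched in a few lines. This is a minimal illustration in Python, not MAP-Works’ actual algorithm: the factor names and weights below are invented assumptions standing in for the admissions, enrollment and survey items described above.

```python
# Hypothetical expert model: combine several data sources into one
# risk score. Factor names and weights are illustrative assumptions.
WEIGHTS = {
    "low_hs_gpa": 0.30,         # admissions / high school records
    "failed_placement": 0.20,   # placement test scores
    "part_time": 0.15,          # enrollment records
    "low_engagement": 0.20,     # survey responses
    "financial_concern": 0.15,  # survey responses
}

def risk_score(flags):
    """Sum the weights of the factors present for a student (0.0 to 1.0).

    Not every item affects risk equally, which is exactly what the
    expert-assigned weights capture.
    """
    return sum(WEIGHTS[f] for f in flags if f in WEIGHTS)

def prioritize(students):
    """Sort students so the highest combined risk appears first."""
    return sorted(students, key=lambda s: risk_score(s["flags"]), reverse=True)

roster = [
    {"name": "A", "flags": ["part_time"]},
    {"name": "B", "flags": ["low_hs_gpa", "low_engagement"]},
]
print([s["name"] for s in prioritize(roster)])  # B first: higher combined risk
```

The point of the sketch is the sorting step: instead of a staff member evaluating hundreds of items per student, the model reduces everything to a single ranked list that tells us which students to reach first.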
There are lots of expert models out there. The dilemma is that your campus may be different from other campuses. Imagine using the same model for identifying students in need of support at a highly selective institution and at an open-enrollment institution: The model might flag hardly any students at the former and too many to be useful at the latter.
Strategy #4 is to find customized models that do a better job at predicting risk. There is clearly some benefit to having multiple models that are customized to a campus. Custom models also allow campuses to include data specific to their campus or their students, which can improve the accuracy and validity of the predictions.
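One common way to customize a model is to fit its weights to the campus’s own historical outcomes rather than borrowing weights from elsewhere. The sketch below uses a tiny logistic regression trained by gradient descent; the two features and the toy history are invented for illustration and are not any vendor’s method.

```python
import math

def sigmoid(z):
    """Squash a linear score into a 0-1 probability."""
    return 1.0 / (1.0 + math.exp(-z))

def fit(X, y, lr=0.5, epochs=500):
    """Gradient-descent logistic regression; returns [bias, w1, w2, ...]."""
    w = [0.0] * (len(X[0]) + 1)
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi)))
            err = p - yi                    # prediction error for this student
            w[0] -= lr * err                # update bias
            for j, xj in enumerate(xi):
                w[j + 1] -= lr * err * xj   # update feature weights
    return w

def predict(w, x):
    """Predicted attrition risk for one student's feature vector."""
    return sigmoid(w[0] + sum(wj * xj for wj, xj in zip(w[1:], x)))

# Toy campus history (invented): features are
# [hs_gpa_below_3.0, commuter]; outcome 1 = student left the institution.
X = [[1, 0], [1, 1], [0, 0], [0, 1], [1, 1], [0, 0]]
y = [1, 1, 0, 0, 1, 0]
w = fit(X, y)
```

Because the weights come out of this campus’s data, a second campus fitting the same code to its own history would get different weights—which is the sense in which the model is “customized.”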
The dilemma with using a customized model as the sole piece of a retention strategy is that the best models don’t make a difference if people don’t use them. Even if you have a phenomenal statistician or team, if nobody is paying attention to the numbers, then you’re not going to make a difference. We have to take the information that customized models create and package it into some sort of useful format.
The best strategy is Strategy #5: a holistic solution that uses customized models to make sense of the information that we have. It is a system that helps professionals perform the tasks they need to do. That holistic solution could include a combination of some of the simple solutions, or it could include things like referral and logging notes, and using them in conjunction with the information provided by the predictive models.
The system has to present the risk information in a way that can be understood. The models then become part of a good solution and an effective implementation, operating behind the scenes. They are a critical piece, because they direct our efforts and make us more efficient, but they are part of a bigger solution.
Director of Retention Services
EBI MAP-Works Campus Coordinator
Murray State University
The holistic and customized model that MAP-Works has provided helps us direct our efforts efficiently. Two components that have been highly beneficial to Murray State University are the referrals and the risk indicators.
The referral component of MAP-Works is our bread and butter because of the way it has allowed us to create and sustain productive relationships with our faculty, to intentionally intervene with students on a specific issue, and to localize the accuracy of the risk indicator, meaning we can show that it works for Murray State University and it’s not just some abstract national data point that does not work on a local level.
Referrals work much like an early alert system. A concerned faculty member sends a notification regarding a specific issue with a specific student. There are a number of offices and people around campus that can receive referrals, but about 85 to 90 percent of all the referrals are sent to the retention office.
We reach out to that student based on the information contained in the referral. We then follow up with the faculty member who issued the referral and share with them, if we can, what we learned from the student. The follow-up has enabled us to sustain relationships with faculty. Our faculty are genuinely concerned about the welfare of the students, and they want to know that their efforts to help have not been wasted. So we share with them what we learn, and they in turn continue to submit the referrals.
When we receive a referral, we use the “intrusive advising” model, meaning that we reach out to the student first, and we reach out deliberately instead of generally. For example, I won’t just ask standard questions like, “How’s it going? How are your classes?” Instead I ask, “You’ve missed Quiz 3 and Test 2 in Math 140. Is everything okay?” Or, “Your advisor shared with me that you are homesick and that you are considering transferring. Can we meet and talk about some options for you?”
When we are able to meet with the students, we then utilize the “appreciative advising” method, which begins with positive, open-ended questions and establishes a safe rapport with the student. Many of our conversations start with this: “Tell me your story.” This approach usually allows us to see what’s important for the student. And then we’re able to co-create a plan for what their next steps ought to be.
The risk indicators are holistic and customizable. They are based on predictive analytics grounded in a long history of research. They’re color-coded like a traffic light, so they are easy to understand: If a student’s indicator is green, that’s good; if it’s yellow, there are reasons for concern; if it’s red, the student is likely to be in trouble.
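The traffic-light idea amounts to bucketing a numeric risk score into three bands. A minimal sketch, assuming a 0-to-1 risk score and cutoffs that are purely illustrative (MAP-Works’ actual thresholds are not stated here):

```python
# Hypothetical cutoffs for mapping a 0.0-1.0 risk score to the
# green / yellow / red indicator; the values are assumptions.
GREEN_MAX = 0.33   # at or below: the student is doing well
YELLOW_MAX = 0.66  # at or below: there are reasons for concern

def indicator(score):
    """Map a numeric risk score to a color-coded traffic-light band."""
    if score <= GREEN_MAX:
        return "green"
    if score <= YELLOW_MAX:
        return "yellow"
    return "red"

print(indicator(0.2), indicator(0.5), indicator(0.9))  # green yellow red
```

In practice the cutoffs themselves would be tuned per campus, which is where the annual adjustments mentioned earlier come in.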
The reporting function of MAP-Works is robust and it’s accurate. We trust the importance of the referral and we trust the risk indicator. We continually encourage our faculty and our staff to submit referrals and to submit them as early as possible. We’ve learned that sharing our data gives strength to our stories and it underscores the importance of intentional intervention.
To watch this web seminar in its entirety, please go to www.universitybusiness.com/ws120414