Colleges and universities implement analytics programs to support a wide variety of institutional goals, including recruitment, student success, and academic program management.
However, in my experience, few institutions use analytics in any of these areas to measure and manage one of the most critical metrics for evaluating these efforts: financial return on investment.
That is, are the programs and practices in place not only effective but also financially efficient? Provosts and other academic leaders can use such analytics to ensure they are getting the most out of their resource investments.
I suspect that one of the key reasons that financial ROI analysis is uncommon is that financial incentives are not always clear-cut for mission-driven non-profit institutions. In the larger for-profit economy, ROI can be largely measured through profit-and-loss statements.
However, nonprofit higher education institutions are using all available resources to optimize and balance a variety of competing mission-critical priorities such as broad access, student success, research production, and educational quality.
Measuring financial ROI for mission-driven goals can seem like a futile exercise in putting a dollar value on intrinsically non-financial values. For instance, what is the dollar value of increasing access for historically underrepresented populations?
How much is the “right” amount to invest in student success? Or how much monetary value does a highly experienced tenured professor bring to the classroom?
Measuring relative ROI
In many cases, we can use analytics to measure the financial returns on our investments without needing to first answer such imponderable questions. Instead of measuring absolute ROI, which may or may not exist, we can measure relative ROI.
That is, how efficient is a given process or initiative compared to other possible investments? If we can measure the costs and the returns, we can compare ROIs. Let’s explore a few examples.
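As a toy illustration of this idea, the relative-ROI comparison amounts to ranking initiatives by measured return per dollar invested. All program names and dollar figures below are invented for the sketch; real inputs would come from an institution's own cost and outcome data.

```python
# Relative ROI: rather than assigning an absolute dollar value to
# mission-driven outcomes, compare initiatives by return per dollar.
# All names and figures here are hypothetical.

def relative_roi(programs):
    """Rank (name, cost, return) tuples by return per dollar, best first."""
    return sorted(
        ((name, ret / cost) for name, cost, ret in programs),
        key=lambda item: item[1],
        reverse=True,
    )

programs = [
    # (name, annual cost in $, measured return in $)
    ("Peer tutoring center", 250_000, 400_000),
    ("Early-alert advising", 150_000, 330_000),
    ("Targeted aid (baseline)", 200_000, 380_000),
]

for name, roi in relative_roi(programs):
    print(f"{name}: {roi:.2f} returned per dollar")
```

The ranking says nothing about whether any program is "worth it" in the absolute; it only shows which uses of the same dollars are more efficient relative to one another.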
Monitoring and improving the admissions process is quite possibly the most common use case for analytics in higher education. Perhaps this is because the financial incentives in this context are fairly straightforward. To grossly oversimplify, the goal is to enroll a class of the desired size and composition for a minimum cost.
While many institutions employ sophisticated schemes for identifying prospects and tracking students through the admissions process, financial aid packaging often receives significantly less analytic scrutiny. Not only is financial aid one of the most significant cost factors in student recruitment, but it is also one of the most consequential for determining student yield.
My team and I have worked with numerous institutions to quantify these effects. In nearly every case, we have found that financial aid is awarded sub-optimally with respect to both yield rate and net tuition revenue (and usually class composition as well).
With these analyses, we can quantify the ROI of any recruitment or yield program by comparing its costs and effects to the effects that the same money would have if spent on optimally targeted financial aid.
We can even compare the effects of various financial aid programs by determining how each would affect the incoming class. We are thus able to create a meaningful ROI metric for all of our recruitment efforts.
Measuring student success ROI
Improving student success is a strategic goal of nearly every higher education institution in the country. Consequently, student success initiatives, retention committees, peer support networks, and other similarly targeted programs abound.
While most institutions closely monitor retention and other success rates (and many use statistical methods to measure effectiveness), I have encountered few that regularly measure their financial efficiency. This is probably because there is no clear baseline for judging what counts as an efficient investment.
A solution for measuring student success ROI once again lies in financial aid. Although many institutions use financial aid primarily as a recruitment tool, it is also one of the most important factors determining students’ retention and graduation.
In doing the analysis, institutions can quantify the effect that financial aid dollars have on persistence and success rates. With these metrics in hand, all student success initiatives have a straightforward ROI metric: how effective is any program compared to the effectiveness of the same dollars spent on targeted financial aid?
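A minimal sketch of that baseline comparison follows. The retention lift and the aid-effectiveness figure are hypothetical; in practice, the effect of targeted aid on persistence would be estimated from the institution's own statistical models.

```python
# Hypothetical sketch: compare a student success initiative against the
# baseline of spending the same dollars on targeted financial aid.

def roi_vs_aid_baseline(program_cost, students_retained_by_program,
                        students_retained_per_aid_dollar):
    """Ratio > 1 means the program retains more students than the same
    money spent on targeted financial aid would (per the aid model)."""
    aid_equivalent = program_cost * students_retained_per_aid_dollar
    return students_retained_by_program / aid_equivalent

# Example (invented numbers): a $300,000 initiative retains an estimated
# 45 additional students; the aid model predicts 0.0001 retained students
# per dollar, i.e., about $10,000 of targeted aid per retained student.
ratio = roi_vs_aid_baseline(300_000, 45, 0.0001)
print(f"{ratio:.2f}x the retention of equivalent targeted aid")
```

Framing every initiative against the same aid-spending baseline gives otherwise disparate programs a common, comparable ROI metric.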
Tracking instructional activity
For most institutions, the single largest expense is the salaries paid to instructional faculty. Surprisingly, few institutions maintain robust data sets on faculty instructional activity.
The reasons for this dearth of data are manifold, but the two most prevalent are a desire to avoid micromanaging individual faculty and a preference for letting departments make course staffing decisions locally.
Both are laudable goals, but neither precludes a campus from maintaining reliable data systems for strategic decision-making. For example, an instructional data set can measure both instructor costs (accounting for the myriad non-instructional faculty activities) and student tuition revenues at a granular level.
These data allow ROI measurement through direct contribution-margin calculations for different programs and departments. While an institution may have a few mission-critical programs with negative margins, it is critical that it be able to offset those costs with positive-margin programs elsewhere.
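The contribution-margin arithmetic itself is simple once the granular data exist. The program names and figures below are invented; a real analysis would draw costs and revenues from the instructional data set described above.

```python
# Hypothetical contribution-margin sketch for academic programs.
# Figures are invented for illustration only.

def contribution_margin(tuition_revenue, instructional_cost):
    """Direct financial contribution of a program: revenue minus cost."""
    return tuition_revenue - instructional_cost

programs = {
    # name: (annual tuition revenue $, annual instructional cost $)
    "Business":   (4_200_000, 2_900_000),
    "Philosophy": (600_000, 750_000),  # mission-critical, negative margin
}

for name, (revenue, cost) in programs.items():
    margin = contribution_margin(revenue, cost)
    print(f"{name}: margin ${margin:,}")
```

The point is not that negative-margin programs should be cut, but that leaders can only manage the portfolio deliberately when they can see which programs subsidize which.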
Measuring ROI is a critical element of making smart and effective financial decisions. With the right analytics tools, ROI can be measured for investments in mission-critical areas that are not amenable to traditional profit-and-loss arithmetic.
With the improved financial decision-making that it enables, almost every institution will find that the development of a robust analytics program has a very strong ROI and will more than pay for itself through more efficient processes and investments.
Craig Rudick is the former executive director of institutional research and lead data scientist at the University of Kentucky. He is now director of product strategy and data scientist at HelioCampus.