Coping with college-ratings uncertainty

Institutions can prepare to prosper under the administration’s proposed ratings system
By Jennifer Wick | Issue: April 2015
March 23, 2015

Fueled by government agendas, national press and public opinion, higher education has in recent years come under increased scrutiny in the form of calls for heightened transparency and accountability.

Some of the U.S. Department of Education’s initiatives in response include:

  • The College Affordability and Transparency Center, which provides searchable lists of institutions by largest and smallest tuition increases and highest and lowest net price
  • Gainful employment regulations intended to protect students in career training programs from incurring unmanageable debt coupled with poor employment prospects
  • A net-price calculator requirement, giving families the opportunity to estimate net costs after financial aid rather than assess college cost based on sticker price alone

President Barack Obama’s new College Rating Plan seeks to “strengthen the performance of colleges and universities in promoting access, ensuring affordability, and improving student outcomes through the design of the college ratings system.”

Many of these initiatives, however, have fallen short of the goals originally envisioned.

For example, gainful employment regulations have been vigorously debated and revised, and a diluted version of the original proposal was released last fall.

Metrics in the original plan—such as student loan default rates—were removed, while others were softened.

For example, the threshold for annual loan payments as a portion of wages was lowered from 12 percent to 8 percent.

Why? The challenge, in large part, was in crafting standards applicable to different types of institutions.
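
As a rough illustration of how a share-of-wages threshold of this kind operates, the sketch below checks annual loan payments against a cap. The 8 percent figure comes from the discussion above; the functions themselves are a simplified illustration, not the regulation’s actual formula:

```python
def debt_to_earnings_ratio(annual_loan_payment, annual_earnings):
    """Share of annual earnings consumed by student loan payments."""
    if annual_earnings <= 0:
        raise ValueError("annual earnings must be positive")
    return annual_loan_payment / annual_earnings

def passes_threshold(annual_loan_payment, annual_earnings, threshold=0.08):
    """True if loan payments stay at or below the threshold share of wages.
    The 8% default reflects the figure discussed above; the real rule
    is more involved."""
    return debt_to_earnings_ratio(annual_loan_payment, annual_earnings) <= threshold

print(passes_threshold(2400, 35000))   # $2,400 / $35,000 ≈ 6.9% -> True
print(passes_threshold(4800, 35000))   # $4,800 / $35,000 ≈ 13.7% -> False
```

The same payments that clear the threshold against one wage level can push a program over the line against a lower one, which is why crafting a single cutoff for very different institutions proved so contentious.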


Net price calculators have yet to gain traction as a primary means of delivering transparency about affordability. Sallie Mae’s “How America Pays for College 2013” reports that 40 percent of families eliminated schools based on cost before conducting any research, suggesting that they are not making use of the calculators.

Why? A high degree of variability in the tools—from the amount of information collected to the vintage of the data used to display results—creates wide variations in accuracy and makes direct comparisons between institutions challenging.

In addition, many calculators are complicated to complete, or simply difficult to find on institutions’ websites. Without strict standards and robust education programs about the information they provide, the calculators fall short in demystifying the difference between sticker price and net price.
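
Underneath all that variability, the arithmetic every net price calculator performs is the same subtraction. A minimal sketch, with illustrative figures only:

```python
def net_price(cost_of_attendance, gift_aid):
    """Net price: sticker price (cost of attendance) minus grants and
    scholarships (gift aid). Loans and work-study are excluded because
    they must be repaid or earned."""
    return max(cost_of_attendance - gift_aid, 0)

# Illustrative only: a $45,000 sticker price with $28,000 in gift aid
print(net_price(45000, 28000))  # -> 17000
```

The wide variation the text describes comes not from this subtraction but from how each institution estimates the gift aid term: the questions asked of families, and the year of data behind the estimate.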

In a similar fashion, much uncertainty remains about what the proposed college ratings system will measure. For example, the U.S. Department of Education has confirmed only that the ratings will differentiate between two- and four-year institutions.

But will there be further control for differences between institutions, such as location, program mix, selectivity and the type of students served? What about institutional mission? The department has discussed creating a statistical model to adjust for some of these factors, but this has already proved an area of contention.

Likely metrics

More clarity is available on metrics that will not be included: level of civic engagement, job placement rates, student satisfaction and average loan debt. With IPEDS (the Integrated Postsecondary Education Data System) accounting only for first-time, full-time students, and NSLDS (the National Student Loan Data System) collecting information only on federal loan borrowers, the college ratings system adds fuel to the debate over the need for a federal student unit record system to more effectively track student outcomes.

The intention of the rating system is to “use these newly available data to invest federal student aid where it will do the most good.” And where will that be? Presumably at the institutions with the most impressive student outcomes that also provide the greatest access.

These would likely be institutions with the greatest resources, although there is also discussion of including a yet-to-be-determined measure of improvement to give less privileged institutions opportunities to garner favorable ratings.

Degree-granting institutions will be rated as low-, middle- or high-performing, although it has yet to be determined whether institutions will be given an aggregate score or ratings for each of the metrics under consideration. (A full description of the proposed ratings metrics is available via the U.S. Department of Education’s website.)

Here are the proposed measures related to access:

  • Percentage of students receiving Pell Grants
  • Average gap between Expected Family Contribution and Cost of Attendance that is not covered by gift aid
  • Percentage of enrolled first-generation college students

Will the ratings system’s metrics encourage institutions to increase access and invest in students who are typically more costly to enroll? What about students whose estimated family contributions fall just above the Pell eligibility threshold, and whose discount rates are typically highest because they do not benefit from federal grant support?

These measures are related to affordability:

  • Average net price
  • Average net price by family income quartiles (for federal aid recipients only, since that is the only data available)

Beyond reflecting affordability, do these measures also account for fiscally responsible, sustainable discounting practices?
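
As a sketch of what reporting average net price by family income quartile involves, the function below buckets federal aid recipients by income and averages net price within each quartile. The data layout is an assumption for illustration:

```python
from statistics import mean, quantiles

def avg_net_price_by_quartile(records):
    """records: (family_income, net_price) pairs for federal aid recipients,
    the only students for whom income data is available. Returns the average
    net price within each family-income quartile, lowest quartile first."""
    incomes = [income for income, _ in records]
    q1, q2, q3 = quantiles(incomes, n=4)  # the three quartile cut points
    buckets = [[], [], [], []]
    for income, price in records:
        # count how many cut points this income exceeds -> quartile index
        idx = sum(income > cut for cut in (q1, q2, q3))
        buckets[idx].append(price)
    return [mean(b) if b else None for b in buckets]
```

Because only federal aid recipients appear in the data, the lowest quartile here is the lowest quartile of aided families, not of all enrolled students—one more way the published figure can mislead.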

These measures are related to outcomes:

  • IPEDS graduation rate
  • Alternate completion rate based on NSLDS data
  • Transfer-out rates
  • Student earnings
  • Graduate school attendance rate
  • On-time loan repayment rate

And so continues the debate about the purpose of a college education—these measures of degree and employment attainment are more quantifiable than are less tangible metrics, such as civil and social responsibility, critical thinking, problem-solving, teamwork and personal growth.

But will the metrics on student earnings encourage institutions to shift academic offerings into high-salary areas such as engineering and computing? Are the higher average starting salaries of STEM graduates an accurate reflection of success compared with the lower earnings of service professions, such as teaching, social work and ministry?

And how will the data be collected? Some states already track outcomes data, but only for graduates who work in-state. Will institutions be responsible for collecting it if a federal unit record database is not created?

Whatever the method, robust earnings metrics will represent an improvement over the proprietary salary ratings that only make use of data volunteered by recent grads.


Much has already been written on the topics covered above, and more questions than answers remain. Here’s what institutions can do to prepare:

  • Understand what might constitute a peer group for comparison. This is not necessarily direct competitors for applicants, but institutions that “look alike” in terms of setting (urban, suburban, rural), size of enrollment, percent Pell-eligible, and SAT/ACT scores. College Results Online has a helpful search tool for establishing a peer group.
  • Evaluate the effectiveness of current financial aid strategies. Are funds being used as strategically as possible? Is there room to invest in “higher cost” populations, or is the current discount rate unsustainable? Have new aid initiatives failed to produce desired results? Does the discount rate continue to increase while yield is on the decline?
  • While the Obama administration has made it clear that the ratings are meant to be measures of performance, not rankings, institutions that earn higher ratings are likely to advertise their “advantage” quickly. Providing robust information about outcomes is the best way to make the case for an institution’s value, so evaluate your institution’s outcomes data. Ideally, you can promote metrics such as placement rates and starting salaries. Rather than waiting for the administration to determine what constitutes favorable outcomes, be proactive.
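
The peer-group screen described in the first bullet can be sketched as a simple filter. The fields and tolerance bands below are illustrative assumptions, not College Results Online’s actual criteria:

```python
from dataclasses import dataclass

@dataclass
class Institution:
    name: str
    setting: str        # "urban", "suburban" or "rural"
    enrollment: int
    pct_pell: float     # share of students who are Pell-eligible
    median_sat: int

def find_peers(target, candidates, enroll_band=0.25, pell_band=0.10, sat_band=100):
    """Institutions that 'look alike': same setting, enrollment within 25%,
    Pell share within 10 points, median SAT within 100 points.
    (Bands are illustrative, not any tool's published criteria.)"""
    return [c for c in candidates
            if c.name != target.name
            and c.setting == target.setting
            and abs(c.enrollment - target.enrollment) <= enroll_band * target.enrollment
            and abs(c.pct_pell - target.pct_pell) <= pell_band
            and abs(c.median_sat - target.median_sat) <= sat_band]
```

The point of the exercise is the same whatever the bands: compare your numbers against institutions that serve similar students, not against your applicant-pool competitors.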

Focus on access

If the proposed college rating system goes the way of some of its forerunners in accountability and transparency—meaning it lacks clear, understandable, standardized criteria based on sound data—it may not deliver the intended impact.

However, the national focus on access and outcomes is only likely to increase, and institutions would be well-served to take charge of their own data and messages on these fronts. Making the case for affordability, value, and benefits has never been more important in attracting and retaining students.

Jennifer Wick is vice president of Scannell & Kurz higher education enrollment consultants, a Ruffalo Cody company.