Accuracy matters: How colleges can avoid misreporting with reliable data

Properly maintaining and delivering critical information to agencies can help institutions avoid missteps and potential liability, says a principal at advisory firm Baker Tilly.

No matter how diligent an institution is about maintaining financial information and other data, inaccuracies do occur when it comes time to report it. Some get flagged as flagrant misreporting; others are unintentional.

Either way, the stakes of reporting accurately are high, whether the request comes from a federal data system such as IPEDS or from rankings services such as U.S. News & World Report. The situations at Columbia University, the University of Southern California and Temple University should drive leaders to ask tough questions, assess their processes and potentially reach out to partners who can assist them.

“We spend a lot of time helping institutions hopefully stay out of the spotlight,” says Adrienne Larmett, principal at Baker Tilly, a leading advisory CPA firm that works with more than 400 colleges and universities. “Data reporting has been an area of focus from a risk perspective for institutions for a while, for so many reasons. Our goal as internal audit risk advisors is to prevent adverse events. And if something has happened, and you’ve ended up in the press, we help you improve your processes to remediate those issues, so you don’t continue to end up there.”

To avoid those issues, colleges must ensure that dozens of procedures are buttoned up tightly, that stakeholders involved in reporting understand what the information means and how and when it should be disseminated, and that the systems holding that data are kept up to date.

To learn more about the impacts of misreporting and ways institutions can avoid it, University Business sat down for a conversation with Larmett, who works solely with higher education clients at the firm:

How serious can the consequences be for misreporting of data, whether intentional or not?

The type of misreporting and its recipient really dictate the severity of the consequences. Broadly speaking, misreporting can lead to:

  • Loss of rankings, which can impact your ability to recruit students. If you’re not on a list, you may not be searchable to potential [students].
  • A reputational hit right away, regardless of what the root cause is.
  • Civil and criminal penalties under consumer protection laws, even if the misreporting is unintentional, plus civil fines or class actions.
  • Costs related to litigation, including settlements.
  • Fines imposed on institutions by rankings services or accreditors.
  • Costs to remediate. Institutions don’t want this to happen, so they’re going to have to stand up infrastructure, people and systems improvements.
  • Settlements that require an annual audit of data before it goes out.
  • Loss of accreditation and the ability to grant degrees. If you can’t grant degrees, you’re not going to have those students. Think about the long-term impact on tuition: if you can’t get students in, you aren’t receiving the tuition revenue.
  • Governmental inquiry and audits. Once you have some type of misreporting, it could lead to government inquiry in other areas.

How does misreporting happen and where does it most frequently occur?

Schools by and large are very aware of the benefits of the rankings and the pitfalls of misreporting. Through our audits, we see certain trends and themes that could lead to misreporting. I’ll break it down in terms of people, process and technology. From a people standpoint, one reason could be no central oversight: reporting could be just one component of someone’s many duties and responsibilities, or it could be decentralized, with many people contributing to the data, which is compiled and sent out the door with little secondary review. Sometimes it’s inconsistent roles and responsibilities: some people don’t know what they should be doing, or there is a lack of formality around who does what.

The second component is process. There could be a lack of formalization around the processes for data extraction. Compilation could suffer from human error or inconsistent attention to detail. The same person might be pulling the data, preparing the data and then submitting it, without that secondary set of eyes. Documentation retention: we may not keep that information readily available, or it’s not stored centrally. Varying interpretations of survey questions: maybe the question doesn’t give us enough detail, so we’re using our best judgment. Inconsistent time periods: answering the questions from the same point in time year to year allows for consistency, but sometimes people don’t do that. If you don’t catch it, that can lead to misreporting.

The third component is technology. Oftentimes, data is stored in different systems owned by different people with different query tools. Where is the data coming from? Are the data and the systems complete and accurate? There might be breakdowns in the flow of information, where data feeds into the system of record and doesn’t get all the way over. Data reporting isn’t as simple as, here’s this question from U.S. News, and I’m going to answer it. The data could sit in many different places, owned by many different people.

U.S. News and other agencies all have different methodologies. Do they bear some culpability in the misreporting of data, or are institutions fully to blame?

I wouldn’t say U.S. News or the other recipients of data are culpable. There’s an opportunity for ranking services broadly to provide more consistency and clarity in the way they ask questions and the support they provide. Just having that responsiveness is important. Schools want to report accurately. They know the consequences are severe, but they don’t always know how to respond: the way a question is asked might not align with the way the data is maintained. Schools maintain data to answer specific institutional objectives, not necessarily to answer to a rankings agency. Think about the cost of attendance. A survey asks, what is the cost of attendance? Well, how do you define that? Is it tuition? Is it tuition plus room and board? Is there a stipend on top of that? Almost every survey asks for a report on gender. What if your college keeps data by male, female and gender nonconforming, but the survey only asks for male and female?

We’ve seen instances where institutions refuse to take part in certain agency rankings for whatever reason. Could that become more of a trend?

There’s been an ongoing debate for years about the value and reliability of rankings, especially given the misreporting cases and the potential to manipulate data to gain more favorable rankings. But given the competition for a shrinking population of students, and the reputational boost, increase in potential applications and boost in revenue that a rise in the rankings can bring, I don’t see rankings, or participation in them, going away. We’ve been studying this for a long time. The better we can make ourselves look, provided we do it correctly, appropriately and accurately, the more it increases our ability to recruit. It’s a sense of pride. If rankings are able to bring more students in, that’s a big incentive for schools to continue participating.

What are some of the ways colleges can remedy problems of misreporting?

The most basic thing you can do is establish the right tone at the top. Institutional leadership must establish and reinforce the importance of data integrity and accuracy. They have to come out and say: we report accurately, we believe in the integrity of this information, and we’re doing what we need to do to ensure that what we’re saying is completely accurate. Have your president, your provost or your CFO come out in support of this.

The next thing is establishing a dedicated infrastructure for data reporting: making sure there’s appropriate oversight of data preparation and review, making sure the people within that infrastructure are dedicated and knowledgeable, and making sure staffing is scaled to the volume of reporting you respond to.

Document everything: how you extract data, how you interpret data, how you’re reviewing questions, who is doing that and who’s going to approve it. One of the best practices we continue to see is a data dictionary of commonly used interpretations or definitions. If people come and go, we can at least understand how the institution approaches these things.

And lastly, understand your system capabilities. How is the data maintained? How does it enter the system? What is the flow of information? Know where everything is kept. Make sure everything is complete. Monitor and test for accuracy before data goes out: are there any variances? It’s going to give you a good sense of where you are.
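For institutions looking to operationalize that advice, the checks Larmett describes can be as simple as a scripted pre-submission review. The sketch below is a minimal illustration, not a tool prescribed by Baker Tilly or any rankings service: the field names, definitions, figures and 10% tolerance are all hypothetical, and a real institution would adapt them to its own systems of record.

```python
# Minimal pre-submission check: a data dictionary of agreed definitions
# plus a year-over-year variance test, run before any survey response
# leaves the institution. All field names and figures are hypothetical.

# Data dictionary: one agreed interpretation per reported metric, so
# staff turnover doesn't change how a question gets answered.
DATA_DICTIONARY = {
    "cost_of_attendance": "Tuition + mandatory fees + room and board, fall census date",
    "enrollment_headcount": "All degree-seeking students enrolled as of Oct. 15",
}

# Prior-year submission, retained centrally as documentation.
PRIOR_YEAR = {"cost_of_attendance": 61_250, "enrollment_headcount": 8_410}

# This year's compiled figures, pulled from the system of record.
CURRENT_YEAR = {"cost_of_attendance": 63_100, "enrollment_headcount": 9_950}

VARIANCE_THRESHOLD = 0.10  # flag anything that moved more than 10% year over year


def review_submission(current, prior, threshold):
    """Return a list of items a secondary reviewer should examine."""
    flags = []
    for field, value in current.items():
        if field not in DATA_DICTIONARY:
            flags.append(f"{field}: no agreed definition in the data dictionary")
            continue
        if field not in prior:
            flags.append(f"{field}: no prior-year figure to compare against")
            continue
        change = abs(value - prior[field]) / prior[field]
        if change > threshold:
            flags.append(
                f"{field}: {change:.1%} year-over-year change "
                f"({prior[field]} -> {value}); confirm before submitting"
            )
    return flags


if __name__ == "__main__":
    for flag in review_submission(CURRENT_YEAR, PRIOR_YEAR, VARIANCE_THRESHOLD):
        print("REVIEW:", flag)
```

The point isn’t the code itself but the discipline it encodes: definitions live in one place, prior submissions are retained, and any variance above a set tolerance forces a second set of eyes before the data goes out the door.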

Chris Burt
Chris is a reporter and associate editor for University Business and District Administration magazines, covering the entirety of higher education and K-12 schools. Prior to coming to LRP, Chris had a distinguished career as a multifaceted editor, designer and reporter for some of the top newspapers and media outlets in the country, including the Palm Beach Post, Sun-Sentinel, Albany Times-Union and The Boston Globe. He is a graduate of Northeastern University.
