Artificial intelligence is transforming higher education, influencing recruitment, research and classroom experiences. But while we tend to talk about AI as if it’s a single, monolithic technology, that’s of course not the case.
AI encompasses a wide variety of algorithms, ranging from simple rule-based systems to highly complex deep-learning models. On one end of this spectrum are straightforward algorithms whose logic can be easily understood and communicated, such as those that sort emails. On the other end are impenetrable “black box” systems, whose decision-making processes are so intricate that even their creators struggle to explain how they arrive at conclusions.
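To make that contrast concrete, here is a minimal sketch of the explainable end of the spectrum: a rule-based email sorter, written in Python with invented folder names and keywords, in which every decision traces back to a single readable rule.

```python
# A hypothetical rule-based email sorter. Folders and keywords are illustrative,
# not drawn from any real campus system.
RULES = [
    ("registrar",     ["transcript", "enrollment", "registration"]),
    ("financial_aid", ["fafsa", "scholarship", "tuition bill"]),
    ("spam",          ["limited-time offer", "act now"]),
]

def sort_email(subject: str) -> tuple[str, str]:
    """Return (folder, reason). The reason is the exact rule that fired."""
    text = subject.lower()
    for folder, keywords in RULES:
        for kw in keywords:
            if kw in text:
                return folder, f"matched keyword '{kw}' for folder '{folder}'"
    return "inbox", "no rule matched; left in inbox"

folder, reason = sort_email("Your FAFSA scholarship update")
print(folder, "-", reason)  # financial_aid - matched keyword 'fafsa' for folder 'financial_aid'
```

Nothing here is hidden: a staff member could read the rules, predict every outcome and explain any individual result. A deep-learning model offers no such ledger.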
Today, colleges and universities too often treat all AI tools as interchangeable, in part because many of the people making decisions about the technology do not know exactly how it works. Institutions are delegating human tasks to machines without maintaining control or oversight, and these automated, often unexplainable decisions are affecting students’ lives in ways both good and bad.
Institutions introducing AI have an obligation to deploy it responsibly and explain its workings clearly to faculty, staff and students. Simply put, institutional decision-making around AI should center on “explainability”—that is, ensuring that the way a given AI tool works is transparent and understandable to humans.
When we talk about AI, it’s immediately apparent that there’s a tension between the quality of its output and its explainability. Most AI tools draw on a series of algorithms that instruct machines to analyze data, perform tasks and improve over time in ways that resemble human intelligence. It’s relatively simple to describe an automated process that makes music and movie recommendations based on an individual’s previously expressed preferences, but it’s nearly impossible to explain the inner workings of an algorithm that churns out essays after scanning billions of pages of written work. The decision-making processes of these deep-learning systems are locked away in an impenetrable black box. Even the builders of these large language models lack the keys that allow them entry.
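The recommendation example can be sketched just as simply. The catalog, genres and viewing history below are made up for illustration, but the logic is the kind an institution could describe in one sentence: suggest the titles that share the most genres with what someone has already enjoyed.

```python
# A hypothetical, fully explainable recommender: rank unseen titles by how many
# genres they share with titles the person has already liked. All data is invented.
CATALOG = {
    "Arrival":      {"sci-fi", "drama"},
    "The Martian":  {"sci-fi", "survival"},
    "Whiplash":     {"drama", "music"},
    "Interstellar": {"sci-fi", "drama", "space"},
}

def recommend(liked: list[str]) -> list[tuple[str, int]]:
    """Rank unseen titles by shared-genre count with the liked titles."""
    liked_genres = set().union(*(CATALOG[t] for t in liked))
    scores = [
        (title, len(genres & liked_genres))
        for title, genres in CATALOG.items()
        if title not in liked
    ]
    # The explanation is the score itself: a shared-genre count, nothing hidden.
    return sorted(scores, key=lambda pair: pair[1], reverse=True)

print(recommend(["Arrival"]))  # Interstellar ranks first: it shares both 'sci-fi' and 'drama'
```

No comparable one-sentence account exists for how a large language model arrives at an essay.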
What makes AI so powerful is that it’s able to sift through enormous amounts of digital data much faster than humans. But while AI that is confined and structured can be useful in higher education settings, AI that is unpredictable, unmanageable and unexplainable has the potential to lead institutions down a dangerous road where accountability goes missing. Some academic integrity tools, for instance, have falsely flagged student work as AI-generated, leading some schools to switch off these detection capabilities altogether.
Of course, appropriate uses of black box AI do exist in the classroom. Deep-learning systems powered by generative AI are great for coaching students to write better without doing the writing for them. Because the ultimate decision of whether to accept or reject the writing advice is up to the student, explainability isn’t as crucial—it’s more important in this case to judge the quality of advice than to be able to explain how the algorithm came up with its suggestions.
Yet as black box AI proliferates in higher education, explainability has emerged not just as a sound philosophical approach to institutional AI use but, increasingly, as a legal mandate. The EU’s 2024 AI Act requires transparency, risk assessment and human oversight for high-risk AI applications. In the United States, seven states and New York City have recently enacted or proposed laws regulating AI’s use in consequential decisions that affect employment, finances and education.
Institutions should approach AI with intentionality, prioritizing tools that are transparent, ethical and aligned with their educational goals. When AI makes consequential decisions, explainability should be a non-negotiable criterion, ensuring that decision-making processes are clear and understandable to all stakeholders. AI should enhance the human parts of teaching by automating repetitive tasks, not by attempting to replace human judgment. Selecting and deploying AI is not just a technological choice but an ethical responsibility. By making such commitments, institutions can harness AI to empower instructors and students while upholding the trust and integrity that are the foundation of education.
Institutions should hold themselves accountable for consequential decisions that affect students’ lives and not simply hand them over to machines. The first step in responsible AI use is explaining it clearly, ensuring institutions and students understand its impact.