AI innovations: How to balance the legal and governance considerations

AI raises concerns about data privacy, students and employees being subject to discrimination or bias, and uncertain legal and regulatory environments for colleges and universities.
Beth Burgin Waller and Patrick Austin
Beth Burgin Waller is a principal and chair of the Cybersecurity & Data Privacy practice at Woods Rogers in Virginia, and Patrick J. Austin is of counsel. They advise some of the nation’s leading colleges and universities on cybersecurity planning as well as in the days and weeks following incidents. They may be reached at [email protected] and [email protected].

The proliferation of AI innovations in higher education offers significant benefits: improved efficiency, enhanced experiential learning, personalized assistance or tutoring for students and greater accessibility of complex subjects, among other advantages.

However, with reward comes risk. AI raises concerns about data privacy, the potential for students and employees to be subjected to discrimination or bias, and an uncertain legal and regulatory environment for colleges and universities.

AI’s evolving legal and regulatory environment

The legal and regulatory environment surrounding AI in the United States is unsettled and ever-evolving. At the federal level, congressional leaders have yet to coalesce around AI legislation with a realistic chance of passing both the House and Senate. As a result, the Biden Administration's Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence remains the most coherent guidance on the federal government's use of AI. The executive order directs federal agencies to develop standards and technical guidelines affecting the use of AI across various sectors of the economy.

There has been more legislative activity at the state level. For example, Colorado passed the first piece of comprehensive legislation in the U.S. to address the utilization of “high-risk artificial intelligence systems.” The Colorado law—which goes into effect on Feb. 1, 2026—requires AI developers and entities deploying high-risk AI systems to use “reasonable care” to prevent algorithmic discrimination. The law largely avoids regulating the use of AI systems not considered “high-risk.”

In contrast to the Colorado model, other states have enacted legislation targeting specific aspects or uses of AI. For example, multiple states—including California, Florida, Michigan, Minnesota, Texas and Washington—passed laws regulating the use of generative AI in political advertising. Other states—including Illinois and Maryland—enacted legislation to ensure that individuals know when and how an AI system is being used.

Whether states embrace comprehensive or piecemeal AI legislation remains an open question.

Importance of AI governance

For colleges and universities to take full advantage of the opportunities presented by AI while mitigating the inherent risks, they must take proactive steps to develop and implement an AI governance framework. An AI governance framework is a structured set of policies, standards and best practices designed to govern the development, deployment and use of AI technologies. An effective framework serves as a guidepost to ensure AI systems are used ethically, responsibly and in compliance with applicable legal standards.

There are numerous governance frameworks for colleges and universities to consider.

For example, the National Institute of Standards and Technology (NIST) released an AI Risk Management Framework to help manage risks to individuals, organizations, and society associated with artificial intelligence. NIST’s framework focuses on strategies for incorporating trustworthiness considerations into designing, developing, using, and evaluating AI products, services, and systems.


Another notable AI governance framework is the “AI Principles” set forth by the Organization for Economic Co-operation and Development (OECD). The OECD’s AI Principles emphasize responsible stewardship of trustworthy AI, including transparency, fairness and accountability in AI systems.

No matter the framework a college or university chooses to adopt, certain principles can help in managing and navigating the use of AI technologies, including:

  • Accountability
  • Bias control
  • Privacy protection
  • Safety and security
  • Transparency

Looking ahead

The legal and regulatory environment around AI is likely to remain unsettled, with an ever-expanding patchwork of state-level AI laws and executive orders. Colleges and universities must proactively monitor the AI landscape and develop an AI governance framework that addresses how AI may be used in admissions, hiring, recruitment and the classroom.
