Generative artificial intelligence (AI) is increasingly being integrated into higher education to address challenges such as personalized learning, operational efficiency, data-driven insights, research and innovation, and accessibility and inclusion. However, integrating AI into higher education raises concerns about its ethical and effective use, including data privacy and security risks tied to the data fed into these AI systems and the potential for algorithmic bias.
As colleges and universities consider these issues, it is worth noting that “artificial intelligence” generally encompasses a broad range of technologies and systems designed to perform tasks that typically require human intelligence, such as learning, reasoning, and problem-solving; such AI systems are neither new nor uncommon. Generative AI, by contrast, is specifically designed to generate new data or content that mimics the characteristics of its training data. It uses deep learning techniques to create original and innovative outputs, such as images, audio, or text.
To help alleviate some of these concerns and navigate this new era of generative AI, colleges and universities may wish to implement an AI governance program.
Guidelines on AI use
There are three main factors to consider when developing new guidelines on AI use:
- Consider the balance between providing students with real-world experience and promoting independent thinking. Training and hands-on experience with AI systems will help alleviate students’ concerns about feeling unprepared to enter the workforce. Yet students also need to learn material on their own without relying entirely on AI.
- Determine and clearly articulate the threshold between acceptable and unacceptable use of AI systems. Defining this threshold includes identifying which types of AI systems are permitted, under what circumstances AI can be used, and how AI contributions should be attributed. For example, students may be told that AI can be used for brainstorming, drafting, and editing with a professor’s prior permission, or, alternatively, that AI cannot be used at all.
- Consider policies that are distinct for each college or major and tailored to the individual needs of those students.
A complete AI governance program should also address faculty and staff usage of AI. These uses can be broken down into three broad categories: administrative tasks, such as sending emails and hiring; teaching, such as drafting exam questions; and research. For each of these tasks, the governance program should require that any AI usage supplement, but not replace, a human’s role. For example, the use of AI should be transparent so that the quality and accuracy of any outputs can be verified. Recognizing that AI models may be biased, incomplete, or both is vital.
The components of an AI governance program
An AI governance program for a university should encompass several key aspects for ethical, effective, and secure use of AI technologies. Here are the primary components:
- Ethical Framework and Accountability (i.e., establishing roles and responsibilities for AI technology deployment and training employees, professors, and students)
- Data Governance and Privacy (i.e., ensure data quality and accountability by understanding data lineage)
- Risk Management (i.e., assess and control AI-related risks including both biases and ethical risks)
- Security and Safety (i.e., implement technical safeguards to protect AI systems from cyber threats and establish an incident response plan)
- Transparency and Communication (i.e., develop communication channels for feedback and complaints related to AI use)
- Continuous Monitoring and Improvement (i.e., monitor AI technologies for ethical compliance and performance, and update or modify as necessary)
- Education and Training (i.e., provide education and training to build awareness and knowledge about appropriate use of AI systems)
By integrating these components, a university can develop a robust AI governance program that aligns with ethical standards, ensures data protection, and fosters a safe and inclusive educational environment. As we know, AI is here to stay. Creating an effective AI compliance and governance program will give universities clear guidelines for handling the changing environment and will help students (as well as professors and staff) learn and master the evolving technology.