
Safe and ethical AI implementation in higher education


Christian Pantel
Christian Pantel is the chief product officer at D2L. He leads product strategy, product management, product design, user experience research and accessibility.

A recent survey from the American Association of Colleges & Universities and Elon University’s Imagining the Digital Future Center reveals that more than half of executive higher education leaders haven’t harnessed the full power of generative AI – and 57% reported their spring 2024 graduates weren’t prepared to enter a workforce that leverages AI. A survey commissioned by D2L also found that 43% of workers worry about being replaced by someone with stronger generative AI skills.

These surveys reveal a critical gap in higher education between learners and educators adopting AI, leaving some graduates ill-equipped for the workplace – and still anxious when they get there. While students are eager to apply AI to their learning experience, educators grapple with the role it can play in the classroom while ensuring it is harnessed responsibly, ethically and securely. That has to change.


By understanding how to implement generative AI in a secure and ethical way, we can close this divide between students and educators and prepare learners to harness AI in a manner that amplifies their abilities. New grads should launch into the workforce with future-ready AI skills – prepared for, not afraid of, the ever-changing technological landscape they’ll encounter. The good news is that when AI is embraced and harnessed properly, it has the power to produce more creative, efficient and accessible outputs without sacrificing trust, transparency or privacy.

Transparent AI practices

Transparency is a fundamental principle of ethical AI implementation and use. The design and operation of AI systems and tools should be straightforward and intuitive for all learners, educators and administrators – regardless of familiarity or skill level.

The data used to train generative AI tools should be communicated to all stakeholders, and the intended use should be clearly defined and disclosed. To ensure outputs align with an institution’s values and policies, humans should routinely review and correct them.

Secure infrastructure

Security should be considered above all else. AI systems operate on vast amounts of data – data used for training and data collected from every interaction – so robust security measures are essential to safeguard that data and protect against cyber threats.

The Government of Canada’s 2025-2026 National Cyber Threat Assessment reported a 283% increase worldwide in reported harmful generative AI incidents since 2022. It’s essential that data privacy is maintained to protect against unauthorized access, breaches and misuse.

To protect against harm, regular testing should be conducted to verify that systems function properly and to mitigate risks to users.

Defined data use

It’s important that institutions establish clear policies and practices for collecting, storing and using data to make sure it’s handled responsibly and ethically. These policies should be clearly defined and accessible so stakeholders know exactly how the data will be used, who will have access and under what conditions.

Before data is used, consent should be obtained and individuals should have the right to access, correct or delete their data. To avoid perpetuating any biases, data used for training AI systems should be carefully selected and analyzed.

Enhancing – not replacing – human connections

AI should be used to complement and enhance human interactions, not replace them. Wherever AI is applied to the learning experience, the human element should remain central – preserved and enriched by AI. Using AI doesn’t eliminate the need for an educator’s guidance and support: educators still play a key role in guiding, mentoring and interacting with learners. AI should work in tandem with them as a tool that augments learners’ unique capabilities.

Providing educators, students and staff with training and resources to understand the policies and practices for ethical AI use can help institutions leverage generative AI responsibly and equip students to enter the workforce as ethical, future-ready employees.

With so much uncertainty about how AI is going to be used in the future of work and learning, it’s important to close the gap in higher education AI adoption. As AI continues to evolve and becomes more common in the workplace, institutions need to evolve with it and continue teaching students how to adapt and harness tools that can equip them for success after graduation.
