Here is why generative AI bans will not work

One of the guiding principles for the use of AI in education is to make sure that active learning is not sacrificed.
Roee Barak
https://www.upword.ai/
Roee Barak is the founder and CEO of Upword.

This may finally be the academic year that universities crack down on students’ use of generative AI to write term papers and complete other projects. A fifth of US universities already ban AI, and with OpenAI and Google considering releasing tools to the public that can quickly detect whether a paper was written by their AI, universities could better enforce those bans.

But bans are not the way to go. Neither is allowing students to passively let generative AI produce their assignments, as nearly half of all students have done to some extent, data shows. AI is a powerful force that is already changing the world, and academia should embrace it too, in a way that takes advantage of innovation while staying true to the values of education.

The best, and probably only, approach will be to develop policies that allow students to use AI tools in their research while ensuring that they continue to synthesize information and learn during the process rather than simply cheat on their assignments. Policies also need to make sure that students use AI without plagiarizing, and with the right level of supervision, control and interaction to safeguard the accuracy and quality of their work.

The key to developing those policies is to treat AI as another technology tool, akin to the smartphone, the camera, the laptop computer and even the pocket calculator, all of which generated controversy over how they might be used when they were first introduced. Indeed, math teachers decades ago held demonstrations against letting students use calculators, claiming that students’ math abilities would suffer.

Data shows, however, that math scores for elementary and middle school students have held steady since the 1980s. The calculator did not harm students’ academic abilities, nor did the laptop or the smartphone, and that precedent is likely to hold for AI tools as well.

Active learning is still essential

One of the guiding principles for the use of AI in education, as with these earlier tools, is to make sure that active learning is not sacrificed. Active learning, in which students engage with their work by thinking, discussing, creating and solving problems, has been shown to increase students’ ability to learn and retain information, ideas and skills. Active learning strategies include role play, simulations, debates and problem-based learning, among others.

When students prompt a generative AI tool to write their essay for them, active learning is sacrificed, as there is little intellectual, creative or emotional interaction on the part of the students. But some AI tools, used in the right way, can allow for, and even enhance, the process of active learning, according to professor Stephen Kosslyn, a neuroscientist and world expert on active learning who is chief academic officer at Minerva University and has worked at top universities including Stanford and Harvard.

For example, when AI tools do not simply spit out an essay, but provide information that students can analyze, organize and make notes on before putting together their own essays, there is room for active learning. By saving students time combing through books and quickly delivering the most relevant information, AI tools can give students more time to dedicate to actively developing and engaging with ideas, rather than passively accepting them. In addition, AI-powered augmented and virtual reality applications can simulate role playing and debating.

Of course, this only works when students are the bosses of AI—using it to produce their own work, instead of allowing it to do the work for them. To ensure that this is the case, professors could issue guidance on how to include AI tools in students’ presentations, just as they provide guidance on using other research tools.

That guidance could specify the steps in a project where AI can and should be used, such as the data-gathering, fact-checking and even initial outline phases, while barring its use in the idea-development and writing stages.

Used in this way, AI tools become a boon to students, enabling them to get their work done more efficiently, with greater access to large amounts of data, while ensuring that their work is original rather than plagiarized or manufactured by an application. Working this way is likely the model students will follow after graduation, and training in it while in school will help them develop good lifelong habits for using technology to advance their own ideas and work.

What new tools does AI need?

As more universities develop such policies, AI platforms will ultimately respond, building in the features that let the human be the boss. AI companies are clearly coming to terms with the fact that universities are moving toward more control over AI. That trend helps explain the recent news that both OpenAI and Google have developed watermarking tools that educators could use to detect students’ use of their AI platforms.

But companies, like universities, need to think not simply about detecting the use of AI in order to enforce bans, but about designing AI platforms that promote active learning without leading students to cheat or plagiarize. Features designed for students could include note-taking tools, automated footnote or reference generation, and the ability to review the material at each step of the research and writing process, including evaluating the sources used.

Platforms could also include an “originality meter,” with the application comparing a student’s work against its large database of similar projects and alerting the student, or faculty, to a potential plagiarism problem. Another feature could warn students that using the application to generate a finished product is likely to be “harmful to their student health.”
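As a rough illustration of how such a meter might work, here is a minimal sketch in Python, assuming a TF-IDF cosine-similarity check against a corpus of prior projects. The function name, threshold and example texts are hypothetical; the article describes no actual implementation.

```python
# Hypothetical sketch of an "originality meter": score a new submission
# against previously submitted projects and flag suspiciously close matches.
# The corpus, threshold, and function shape are illustrative assumptions,
# not a description of any real platform's implementation.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity


def originality_check(submission: str, prior_projects: list[str],
                      flag_threshold: float = 0.8) -> tuple[float, bool]:
    """Return (highest similarity to any prior project, whether to flag it)."""
    vectorizer = TfidfVectorizer(stop_words="english")
    # Fit on prior work plus the new submission so all texts share one vocabulary.
    matrix = vectorizer.fit_transform(prior_projects + [submission])
    # Compare the submission (last row) against every prior project.
    similarities = cosine_similarity(matrix[-1], matrix[:-1])[0]
    top_score = float(similarities.max())
    return top_score, top_score >= flag_threshold


# Example: alert the student (or faculty) when overlap looks too high.
score, flagged = originality_check(
    "Active learning improves how well students retain ideas and skills.",
    [
        "Active learning has been shown to improve retention of ideas and skills.",
        "Calculators did not harm students' math scores over the decades.",
    ],
)
print(f"highest similarity: {score:.2f}, flag for review: {flagged}")
```

A real system would need paraphrase detection and document fingerprinting rather than plain lexical overlap, but the shape is the same: compute a similarity score, compare it to a threshold, and surface an alert rather than silently blocking the student.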

AI tools are disruptive, and as institutions with long-standing traditions, it is understandable that universities would be cautious about adopting new technology. But innovation is what universities are supposed to be all about. By developing the right policies on how to utilize AI tools, universities can ensure that their students learn the skills they will need for the future, and that they keep using these tools properly long after they graduate.
