Artificial intelligence is finding ever more creative ways to weave itself into our everyday lives, and higher education is no exception. When OpenAI released ChatGPT in November 2022, administrators scrambled to adapt curricula for AI-empowered students. Little did they realize that college professors hold one of the professions most exposed to AI language modeling.
As quickly as artificial intelligence models develop, so, too, does their impact spread across different facets of higher education. It may be dizzying, but here are some of the most prominent ways AI affects your school.
Artificial intelligence is poised to streamline the workload of both the applying student and the receiving admissions officer.
Students today can ask ChatGPT to create a 500-word response to an open prompt that they’d otherwise feel paralyzed to complete themselves. They can direct the bot to write a dramatic story about an adolescent overcoming a significant life event that includes references to a city of the student’s choosing. Admissions officers already struggle to verify the authenticity of college applications, and the prevalence of AI language modeling will make plagiarism that much more difficult to detect. While new software aims to combat applications littered with AI, some leaders believe the next step forward is introducing video prompts instead.
However, AI technology might be an antidote to the increasing workload and turnover rate among admissions officers. Colleges have begun employing technology that can sift through student transcripts and create preliminary assessments of students’ acceptance likelihood. Allowing software such as Student Select or Sia to do the legwork of review helps officers manage their time and compartmentalize their priorities. Colleges that have embraced AI software in admissions include Rutgers, Rocky Mountain College and Maryville University.
The education industry experienced a 576% increase in phishing attacks in 2022, according to recent Zscaler research. While phishing attempts could once be easily detected by grammatical and spelling mistakes and an awkward tone, communication written by ChatGPT appears more natural, and by extension, easier to trust.
Additionally, hackers are finding ways to leverage ChatGPT’s coding capabilities to hack security systems, tricking the AI into creating malware strains. However, just as bad actors are using the emerging technology maliciously, cybersecurity teams can use AI to test their defenses faster.
Not only can ChatGPT ace the SAT and AP exams, but it’s also stunning scholars in its performance on licensing exams. It passed the Uniform Bar Examination by a “significant margin,” approaching the 90th percentile of test-takers. Additionally, ChatGPT passed three exams associated with the United States Medical Licensing Exam with a 60% accuracy rate. GPT-4, on the other hand, answered medical licensing exam questions with a 90% accuracy rate. “I’m stunned to say: better than many doctors I’ve observed,” said Dr. Isaac Kohane, the test administrator, according to Business Insider.
The accuracy of ChatGPT is prompting professionals to explore how students can use the software to augment their work. The American Medical Association’s medical education innovation unit has begun exploring some foundational AI modules, and it is also collaborating with the National Academy of Medicine to host a workshop on AI in health professions education this spring.
Allowing ChatGPT to do the legwork of writing preliminary drafts frees professors to judge a student’s work by its content and ideas rather than by the student’s command of grammar, style and structure.
“I’m changing all of my assignments to involve more high-level concepts and more integrative knowledge,” said Adam Purtee, an assistant professor of computer science, according to the University of Rochester.
ChatGPT isn’t just heightening the level of work students can do; it’s also streamlining professors’ administrative responsibilities.
“ChatGPT can be used to help professors generate syllabi or to recommend readings that are relevant to a given topic,” said Manav Raj, co-author of the study that discovered college professors’ high exposure to AI language modeling.